Bayesian inference with reliability methods without knowing the maximum of the likelihood function


Wolfgang Betz^a,*, James L. Beck^b, Iason Papaioannou^a, Daniel Straub^a

^a Engineering Risk Analysis Group, Technische Universität München, 80333 München, Germany
^b Division of Engineering and Applied Science, California Institute of Technology, Pasadena, CA 91125

*Corresponding author. E-mail addresses: wolfgang.betz@tum.de (Wolfgang Betz), jimbeck@caltech.edu (James L. Beck), iason.papaioannou@tum.de (Iason Papaioannou), straub@tum.de (Daniel Straub)

Manuscript, published in Probabilistic Engineering Mechanics 53 (2018) 14–22. DOI: 10.1016/j.probengmech.2018.03.004

Abstract

In the BUS (Bayesian Updating with Structural reliability methods) approach, the uncertain parameter space is augmented by a uniform random variable and the Bayesian inference problem is interpreted as a structural reliability problem. A posterior sample is given by an augmented vector sample within the failure domain of the structural reliability problem, where the realization of the uniform random variable is smaller than the likelihood function scaled by a constant c. The constant c must be selected such that 1/c is larger than or equal to the maximum of the likelihood function, which, however, is typically unknown a priori. For BUS combined with sampling-based reliability methods, choosing c too small has a negative impact on the computational efficiency. To overcome the problem of selecting c, we propose a post-processing step for BUS that returns an unbiased estimate for the evidence and samples from the posterior distribution, even if 1/c is selected smaller than the maximum of the likelihood function. The applicability of the proposed post-processing step is demonstrated by means of rejection sampling. However, it can be combined with any structural reliability method applied within the BUS framework.

Keywords: Bayesian updating, Bayesian model class selection, rejection sampling, structural reliability

1. Introduction

The Bayesian learning process is formalized in Bayes' theorem:

$$p(\theta|d) = c_E^{-1}\, L(\theta|d)\, p(\theta) \qquad (1)$$

The prior probabilistic model p(θ) represents the current state of knowledge about the vector of uncertain model parameters θ ∈ R^M. Information (data) d that becomes available in form of measurements or observations is expressed in the likelihood function L(θ|d) = p(d|θ). The posterior probabilistic model p(θ|d) describes the combined knowledge from the prior distribution and the likelihood function. Consequently, the posterior uncertainty about θ is reduced compared to its prior uncertainty. The constant c_E is a normalizing scalar that is called the evidence (marginal likelihood); it measures the plausibility of the assumed stochastic model class [1] and is defined as:

$$c_E = \int L(\theta|d)\, p(\theta)\, \mathrm{d}\theta \qquad (2)$$

The evidence c_E is required for Bayesian model class selection and model averaging [2, 3]. Except for some special cases, the posterior distribution usually cannot be derived analytically, and posterior samples have to be generated numerically. However, it is typically challenging to compute the evidence c_E, because of the multidimensional integral in Eq. (2).

The BUS approach [4] transforms the Bayesian inference problem to a structural reliability problem. In structural reliability, probabilities of rare events are estimated [5, 6, 7]. By interpreting the Bayesian updating problem as a rare event estimation, existing structural reliability methods can be used to perform the Bayesian analysis (see e.g., [8]). An estimate for the evidence c_E is obtained as a byproduct of BUS; in BUS, the evidence is directly related to the rare event probability.
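For intuition on Eq. (2): since the evidence is the prior expectation of the likelihood, it can in principle be approximated by the sample mean of the likelihood over prior draws (this is the strategy referenced with [33] in Section 4). The following minimal sketch assumes NumPy; the function and argument names are our own and not part of the paper.

```python
import numpy as np

def evidence_prior_mc(sample_prior, likelihood, n, rng=None):
    """Crude Monte Carlo estimate of c_E = E_prior[L(theta|d)], Eq. (2).

    sample_prior(rng) returns one draw from p(theta);
    likelihood(theta) returns L(theta|d).
    """
    rng = np.random.default_rng(rng)
    thetas = [sample_prior(rng) for _ in range(n)]
    return float(np.mean([likelihood(th) for th in thetas]))
```

For peaked likelihoods this naive estimator needs very many prior samples, which is precisely the situation that motivates the reliability-method formulation discussed next.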
In BUS, prior to the analysis, a constant c has to be selected. On the one hand, 1/c should not be smaller than the maximum value that the likelihood function can take; i.e., 1/c ≥ L_max. On the other hand, selecting 1/c conservatively large decreases the efficiency of the method. Therefore, an appropriate choice of c is crucial. However, in many cases the maximum of the likelihood function is not known in advance. We propose a post-processing step for BUS that returns an unbiased estimate for the evidence and samples from the posterior distribution, even if 1/c is selected smaller than the maximum of the likelihood function. Like the original BUS approach, the proposed extension of BUS can be combined with any structural reliability method. The applicability of the proposed post-processing step is demonstrated by means of the standard Monte Carlo method, which corresponds to the rejection sampling approach when combined with BUS.

The structure of the paper is as follows: In Section 2 the BUS approach is formally introduced and the role of the scaling constant c in BUS is discussed. In Section 3 the post-processing step for BUS is proposed that can cope with 1/c < L_max.

In Section 4 the proposed post-processing step is investigated in combination with rejection sampling for different example problems. Section 5 briefly summarizes the obtained findings.

2. Bayesian updating with structural reliability methods

2.1. The idea behind BUS

Straub and Papaioannou show in [4] that a Bayesian updating problem can be interpreted as a structural reliability problem. The principal idea behind BUS (Bayesian Updating with Structural reliability methods) is to augment the space of random variables spanned by θ with an additional uniformly distributed random variable π with support [0, 1]. The updating problem is then expressed as a structural reliability problem in the so-obtained augmented random variable space [θ, π]. The failure domain of this reliability problem is defined as:

$$\Omega = \{\pi \le c\, L(\theta|d)\} \qquad (3)$$

where c is a positive constant chosen such that c·L(θ|d) ≤ 1 is maintained for all θ. The domain Ω is illustrated in Fig. 1. Note that Ω denotes both the failure domain and the corresponding event. The link between the domain Ω and the actual Bayesian updating problem is: Samples from the prior distribution of θ that are in Ω follow the posterior distribution [4].

In reliability analysis, the limit-state function is defined such that g(θ, π) ≤ 0 if [θ, π] ∈ Ω, and g(θ, π) > 0 if [θ, π] is outside of Ω (see Fig. 1). A limit-state function g(θ, π) that describes the failure domain defined in Eq. (3) is:

$$g(\theta, \pi) = \pi - c\, L(\theta|d) \qquad (4)$$

Optimally, the constant c should be chosen as the reciprocal of the maximum of the likelihood function, denoted L_max [4]. However, L_max is not always known in advance. In such cases, it is difficult to select c appropriately. The implications of choosing c too large or too small are discussed in Section 2.5.

Figure 1: Illustration of the principle of the rejection sampling algorithm (Algorithm (1)). The highlighted region is the domain Ω defined in Eq. (3). The limit-state function g(θ, π) introduced in Eq. (4) is smaller than or equal to zero within Ω (it is zero at the boundary of Ω), and larger than zero outside of Ω. Samples below the likelihood (i.e., the samples contained in Ω) are independent samples from the posterior distribution. In this example, 43 of the proposed samples are accepted.

2.2. Estimating the evidence in BUS

An estimate for the evidence c_E is obtained as a by-product of BUS. Let p_Ω be the probability that samples [θ, π] from the prior distribution fall into Ω, i.e.:

$$p_\Omega = \Pr[\Omega] = \Pr\left[g(\theta, \pi) \le 0\right] \qquad (5)$$

p_Ω is the target quantity of interest in a reliability analysis and is called the probability of failure. In BUS, p_Ω is directly linked to the evidence c_E through the constant c [4]:

$$c_E = \frac{p_\Omega}{c} \qquad (6)$$

Note that some reliability methods allow evaluating uncertainty bounds for the estimate of p_Ω. In this case, the statistical uncertainty in the estimated evidence c_E can be quantified directly, as the evidence is directly proportional to p_Ω. Uncertainty bounds for the estimated reliability can be evaluated with most simulation methods. For Monte Carlo simulation this is straightforward to do. Moreover, approximate bounds can be derived for any importance sampling method employing a normal approximation of the estimator (e.g., [9]). For Subset Simulation, some bounds were proposed in [10, 11].

2.3. Structural reliability methods in BUS

BUS employs structural reliability methods to perform Bayesian updating. The most straightforward application of BUS is rejection sampling, which corresponds to crude Monte Carlo simulation in the context of structural reliability.
Rejection sampling within BUS is explained in detail in Section 2.4. As was pointed out in [4], it is possible to use other structural reliability methods instead of the simple rejection sampling algorithm (i.e., instead of a Monte Carlo simulation). BUS is often combined with Subset Simulation (see for example [4, 12, 13, 14, 15, 16]), because this method is efficient for very small failure probabilities and its performance does not depend on the dimension M of the vector of uncertain model parameters. Apart from rejection sampling and Subset Simulation, the BUS approach has been combined with the first-order reliability method (FORM) and line sampling in [8]. FORM solves the reliability problem only approximately by linearizing the limit-state function at the most probable point of failure [17, 18]. The line sampling method computes a correction factor for the linearized solution by performing a specified number of line searches parallel to an important direction pointing to the failure surface [19, 20, 21].

To generate realizations of the posterior, an additional post-processing step is required: besides computing the probability of failure, samples located in Ω have to be returned. In Monte Carlo simulation and Subset Simulation, samples located in Ω are directly generated during the reliability analysis.

Importance-sampling-based reliability methods require an additional re-sampling step based on the importance weights associated with the failed samples to produce samples of equal weights. In case of FORM, samples of the approximated failure domain can be easily generated.

Samples from the posterior distribution can be further used for posterior prediction of quantities of interest. A special application is the use of BUS for updating the probability of rare events based on observed system response. In such a case, as the target quantity of interest is the posterior probability of failure, no posterior samples have to be generated and the updating problem can be directly solved by structural reliability methods [8, 22].

2.4. The rejection sampling algorithm

The most trivial application of BUS results in the rejection sampling algorithm [4, 23] (see Algorithm (1) below). This algorithm repeatedly proposes a sample [θ, π] from the prior distribution and accepts the sample if it is located in the failure domain; i.e., if [θ, π] ∈ Ω. The accepted sample is a sample from the posterior distribution. The algorithm is repeated until K posterior samples are generated. The posterior samples resulting from the rejection sampling algorithm are statistically independent.

The quantity p_Ω is the probability that a proposed sample is accepted; p_Ω is also referred to as the acceptance probability. An unbiased estimate p̂_Ω of p_Ω is [24]:

$$\hat p_\Omega = \frac{K-1}{n-1} \qquad (7)$$

where n is the total number of prior samples that were proposed to generate K posterior samples. Note that the estimator K/n produces a biased estimate for p_Ω [24]. An unbiased estimate of the variance of p̂_Ω is [25]:

$$\widehat{\mathrm{Var}}\left[\hat p_\Omega\right] = \frac{\hat p_\Omega \left(1 - \hat p_\Omega\right)}{n-2} \qquad (8)$$

The estimates given in Eqs. (7) and (8) are frequentist estimates. To appropriately quantify the uncertainty about p_Ω based on the outcome of a particular run of rejection sampling, we employ a Bayesian approach. The number n of prior samples needed to generate K posterior samples in the rejection sampling algorithm follows a negative binomial distribution. Thus, having observed a certain n for a given K, the likelihood of p_Ω is:

$$L(p_\Omega \mid n, K) = \binom{n-1}{K-1}\, p_\Omega^{K}\, \left(1-p_\Omega\right)^{n-K} \qquad (9)$$

where $\binom{n-1}{K-1}$ denotes the binomial coefficient. The beta distribution acts as conjugate prior for the Bayesian updating. If one selects the prior distribution for p_Ω based on the Principle of Maximum Information Entropy [26, 27] with only normalization as constraint, then the resulting prior distribution is the uniform distribution on [0, 1], which is a special case of the beta distribution. In this case, the posterior is also a beta distribution and can be expressed as:

$$p(p_\Omega \mid K, n) = \frac{p_\Omega^{K}\, \left(1-p_\Omega\right)^{n-K}}{B(K+1,\, n-K+1)} \qquad (10)$$

where B denotes the beta function. The most probable value of p_Ω a posteriori is K/n. The expectation of p_Ω is:

$$\mathrm{E}\left[p_\Omega \mid K, n\right] = \frac{K+1}{n+2} \qquad (11)$$

The variance of the distribution in Eq. (10) can be derived as:

$$\mathrm{Var}\left[p_\Omega \mid K, n\right] = \frac{(K+1)(n-K+1)}{(n+2)^2\,(n+3)} \qquad (12)$$

For increasing K and n, Eqs. (7) and (11), as well as Eqs. (8) and (12), approach the same values.

Algorithm (1): rejection sampling

As input the algorithm requires: K, the total number of samples to draw from the posterior distribution; c, selected such that 1/c ≥ L_max. The algorithm evaluates the evidence c_E and returns K unweighted and statistically independent posterior samples θ^(k) with k = 1, ..., K.

1. Initialize counters k = 0 and n = 0.
2. while (k < K) do:
   (a) Propose sample [θ, π]:
       i. Draw θ from the prior distribution p(θ).
       ii. Draw π from the uniform distribution that has support [0, 1].
   (b) if (g(θ, π) ≤ 0) then:
       i. Increase the counter k = k + 1.
       ii. Accept the proposed sample as a posterior sample, i.e.: set θ^(k) = θ.
   (c) Increase the counter n = n + 1.
3. Estimate p̂_Ω by means of Eq. (7) or Eq. (11).
4. Evaluate the evidence ĉ_E = p̂_Ω/c.

On average, the algorithm requires K/p_Ω samples from the prior distribution to generate K samples from the posterior distribution. The principle of the rejection sampling algorithm is illustrated in Fig. 1. Note that Algorithm (1) is similar to a Monte Carlo simulation for solving the structural reliability problem that has limit-state function g(θ, π) and random variables [θ, π]. The difference is that in a Monte Carlo simulation typically the total number of samples n is fixed, whereas in Algorithm (1) the number K of samples to be generated in the domain Ω is specified.

The main advantage of rejection sampling is that it produces independent samples from the posterior distribution. However, if the posterior distribution does not match the prior distribution well, p_Ω becomes small, which renders the rejection sampling algorithm inefficient. As a consequence, rejection sampling is typically inefficient and usually more advanced reliability methods are employed (see Section 2.3). However, because rejection sampling is straightforward and easy to understand, we use it here to demonstrate the consequences of choosing the constant c such that 1/c < L_max. A code sketch of Algorithm (1) is given below.
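The following Python sketch illustrates Algorithm (1) for a generic prior sampler and likelihood function. It assumes NumPy; the function and variable names are our own choices and not part of the original algorithm statement. It returns the posterior samples together with the estimates of Eqs. (7), (8) and (11).

```python
import numpy as np

def bus_rejection_sampling(sample_prior, likelihood, c, K, rng=None):
    """Algorithm (1): BUS rejection sampling.

    sample_prior(rng) must return one draw theta from the prior p(theta);
    likelihood(theta) must return L(theta|d); c must satisfy 1/c >= L_max.
    """
    rng = np.random.default_rng(rng)
    samples, k, n = [], 0, 0
    while k < K:
        theta = sample_prior(rng)                  # step 2(a)i
        pi = rng.uniform(0.0, 1.0)                 # step 2(a)ii
        if pi - c * likelihood(theta) <= 0.0:      # g(theta, pi) <= 0, Eq. (4)
            samples.append(theta)                  # accept as posterior sample
            k += 1
        n += 1
    p_hat = (K - 1) / (n - 1)                      # unbiased estimate, Eq. (7)
    var_hat = p_hat * (1.0 - p_hat) / (n - 2)      # variance estimate, Eq. (8)
    p_bayes = (K + 1) / (n + 2)                    # posterior mean, Eq. (11)
    c_E = p_hat / c                                # evidence, Eq. (6)
    return np.array(samples), c_E, p_hat, var_hat, p_bayes
```

For Example problem 1a of Section 4, for instance, one may pass sample_prior = lambda rng: rng.standard_normal() and likelihood = lambda th: np.exp(-0.5 * ((th - 3.0) / 0.3)**2) / (0.3 * np.sqrt(2 * np.pi)), with c = 1/L_max = 1/1.33.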

2.5. The constant c in BUS

An appropriate selection of the constant c is crucial in BUS. The constant c in BUS is motivated by rejection sampling. For the equivalent rejection sampling problem targeted in BUS, the value of (c·c_E)⁻¹ is a scaling factor for the proposal distribution p(θ) that must be chosen such that the scaled proposal distribution is an envelope function for the target distribution [28]; i.e., (c·c_E)⁻¹ p(θ) ≥ p(θ|d) must be maintained for all θ, which reduces to 1/c ≥ L_max if p(θ|d) is replaced with Eq. (1).

On the one hand, if a value of 1/c that is larger than the maximum L_max of the likelihood function is selected, the efficiency of the approach decreases; the acceptance probability p_Ω of BUS decreases linearly with c, as seen from Eq. (6). On the other hand, if a value of 1/c that is smaller than the maximum L_max of the likelihood function is selected, the scaled proposal distribution does not envelope the target distribution for all θ and, thus, BUS does not produce samples that follow the posterior distribution. Instead, in such cases, BUS produces samples that follow a distribution p_trunc^{(c)} that is proportional to the part of the target distribution that is contained within the envelope of the proposal distribution. The distribution p_trunc^{(c)} can be derived as:

$$p_{\mathrm{trunc}}^{(c)}(\theta|d) = \frac{L_{\mathrm{trunc}}^{(c)}(\theta|d)\, p(\theta)}{c_E^{(c)}} \qquad (13)$$

where c_E^{(c)} = p_Ω^{(c)}/c and p_Ω^{(c)} are the evidence and acceptance probability associated with the selected 1/c < L_max. L_trunc^{(c)}(θ|d) is defined as:

$$L_{\mathrm{trunc}}^{(c)}(\theta|d) = \min\left(L(\theta|d),\; c^{-1}\right) \qquad (14)$$

Let w^{(c)} be a correction factor such that:

$$c_E = w^{(c)}\, c_E^{(c)} \qquad (15)$$

One can expand the correction factor w^{(c)}:

$$w^{(c)} = \frac{c_E}{c_E^{(c)}} \qquad (16)$$

$$= \frac{\int L(\theta|d)\, p(\theta)\, \mathrm{d}\theta}{c_E^{(c)}} \qquad (17)$$

$$= \int \frac{L(\theta|d)}{L_{\mathrm{trunc}}^{(c)}(\theta|d)} \cdot \frac{L_{\mathrm{trunc}}^{(c)}(\theta|d)\, p(\theta)}{c_E^{(c)}}\, \mathrm{d}\theta \qquad (18)$$

$$= \int \frac{L(\theta|d)}{L_{\mathrm{trunc}}^{(c)}(\theta|d)}\, p_{\mathrm{trunc}}^{(c)}(\theta|d)\, \mathrm{d}\theta \qquad (19)$$

The quantity defined in Eq. (19) can be estimated from the samples {θ_k^{(c)}}_{k=1,...,K} generated with BUS and 1/c < L_max, which follow the distribution p_trunc^{(c)}(θ|d):

$$\hat w^{(c)} = \frac{1}{K} \sum_{k=1}^{K} \frac{L(\theta_k^{(c)}|d)}{L_{\mathrm{trunc}}^{(c)}(\theta_k^{(c)}|d)} = \frac{1}{K} \sum_{k=1}^{K} \max\left(1,\; c\, L(\theta_k^{(c)}|d)\right) \qquad (20)$$

Using Eq. (15) and the estimate for w^{(c)} given in Eq. (20), one can correct the evidence computed with 1/c < L_max. Let p_Ω be the probability of Eq. (5) for a choice of c = 1/L_max. From Eq. (15), it follows that

$$p_\Omega = \frac{p_\Omega^{(c)}\, w^{(c)}}{c\, L_{\max}} \qquad (21)$$

where c_E^{(c)} = p_Ω^{(c)}/c and c_E = p_Ω · L_max is used (Eq. (6) does not hold if 1/c < L_max). Note that c ≥ 1/L_max implies that p_Ω ≤ p_Ω^{(c)}, as the relative size of the failure domain increases with decreasing 1/c according to Eq. (3); c → ∞ implies p_Ω^{(c)} → 1. For 1/c > L_max, p_Ω > p_Ω^{(c)}, since in this case w^{(c)} = 1 and thus p_Ω = p_Ω^{(c)}/(c·L_max). Moreover, L(θ^{(c)}|d) ≥ L_trunc^{(c)}(θ^{(c)}|d) clearly holds independent of c, due to Eq. (14). Thus, the values that w^{(c)} can take are bounded: 1 ≤ w^{(c)} ≤ c·L_max. For 1/c ≥ L_max the correction factor is w^{(c)} = 1. For 1/c < L_max the correction factor must be smaller than c·L_max in order to maintain p_Ω ≤ p_Ω^{(c)} (see Eq. (21)). Additionally, it is clear that c_E^{(c)} underestimates the actual evidence c_E. The relative error in the evidence associated with selecting 1/c too small (i.e., 1/c < L_max) is:

$$\varepsilon_{c_E}^{(c)} = \frac{c_E - c_E^{(c)}}{c_E} = 1 - \frac{1}{w^{(c)}} \qquad (22)$$

$$= 1 - \frac{p_\Omega^{(c)}}{p_\Omega\, c\, L_{\max}} \qquad (23)$$

$$= 1 - \frac{\int L_{\mathrm{trunc}}^{(c)}(\theta|d)\, p(\theta)\, \mathrm{d}\theta}{\int L(\theta|d)\, p(\theta)\, \mathrm{d}\theta} \qquad (24)$$

$$= \mathrm{E}_{p(\theta|d)}\left[\frac{L(\theta|d) - L_{\mathrm{trunc}}^{(c)}(\theta|d)}{L(\theta|d)}\right] \qquad (25)$$

For c → ∞, the error ε_{c_E}^{(c)} approaches one and the generated samples follow the prior distribution. For c = 1/L_max, it is ε_{c_E}^{(1/L_max)} = 0 and the samples follow the posterior distribution.

In the following, post-processing steps for BUS are proposed that allow 1/c in BUS to be chosen smaller than the maximum of the likelihood function. The proposed post-processing steps are applicable to all reliability methods used within the BUS framework. However, the performance of the proposed approach still depends strongly on the selected c.
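As a minimal illustration of Eqs. (15) and (20), the following snippet (our own sketch, assuming NumPy) corrects an evidence estimate obtained with a too-small 1/c. Note that only the likelihood values of the generated samples are needed, not their locations.

```python
import numpy as np

def corrected_evidence(likelihood_values, c, p_omega_c):
    """Correct the BUS evidence for 1/c < L_max, Eqs. (15) and (20).

    likelihood_values: array of L(theta_k|d) for the K samples that
    follow p_trunc^(c); p_omega_c: estimated acceptance probability.
    """
    w_hat = np.mean(np.maximum(1.0, c * np.asarray(likelihood_values)))  # Eq. (20)
    c_E_trunc = p_omega_c / c          # uncorrected evidence c_E^(c)
    return w_hat * c_E_trunc           # corrected evidence, Eq. (15)
```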
For the particular combination of BUS with Subset Simulation, the authors have recently proposed a different strategy [14], denoted aBUS, to handle the constant c in BUS. In aBUS, the constant c does not have to be specified initially and is learned adaptively during the simulation. If Subset Simulation is used as reliability method in BUS, we recommend the aBUS approach, as preliminary comparative numerical studies of performance indicated aBUS to be more efficient than the post-processing steps proposed in the following.

3. Proposed post-processing steps for BUS

3.1. Evidence

As discussed in the previous section, BUS does not return samples from the posterior distribution if 1/c is selected smaller than L_max. However, employing Eqs. (6) and (20), the evidence of the investigated problem can be estimated even if c is not selected properly. Note that Eq. (20) does not require L_max to be known.

3.2. Posterior samples

Since the aim is to obtain samples from the posterior distribution, the distribution of the generated samples needs to be corrected, as the samples produced by BUS with 1/c < L_max follow the distribution p_trunc^{(c)}(θ|d) in Eq. (13). One viable strategy is to consider the samples from distribution p_trunc^{(c)}(θ|d) as weighted samples from the posterior distribution. In this case, one can directly work with the weighted posterior samples or perform a re-sampling step based on the sample weights to get unweighted samples that asymptotically follow the posterior distribution. Alternatively, a Metropolis-Hastings [29, 30] step can be added to Algorithm (1) to ensure that the generated samples follow the posterior distribution asymptotically. We refer to this algorithm as M-H rejection sampling (Algorithm (2)); it is based on an algorithm presented in [31].

In this contribution, we focus on generating unweighted posterior samples. More specifically, K unweighted posterior samples are generated from K weighted posterior samples. Using this approach, information is lost: samples with large weights tend to be repeated and samples with small weights tend to be discarded. If applicable, the following two strategies can be more efficient instead: (i) Instead of generating unweighted posterior samples, the weighted posterior samples obtained with BUS and 1/c < L_max are directly used. (ii) Based on K weighted posterior samples, considerably more than K unweighted posterior samples are generated by means of a resampling strategy.

3.3. M-H rejection sampling algorithm

The initial part of the proposed algorithm (Steps (1) and (2) in Algorithm (2)), which produces samples from distribution p_trunc^{(c)}(θ|d), is equivalent to rejection sampling (Algorithm (1)). These Steps (1) and (2) can in principle be replaced by any reliability method. In case the applied reliability method produces weighted samples from p_trunc^{(c)}(θ|d), the weights of Eq. (20) need to be multiplied by the weights resulting from the reliability analysis. In Step (3), the evidence is computed based on the correction factor w^{(c)} derived in Section 2.5. If (i) either c_E or p_Ω are the quantities of interest, or (ii) weighted samples from the posterior distribution are sufficient, the procedure can be terminated at this point (i.e., after Step (3)).

Assuming that the applied reliability method resulted in unweighted samples from p_trunc^{(c)}(θ|d), the importance weight of the jth generated sample is L(θ_{(j)}^{(c)}|d)/L_trunc^{(c)}(θ_{(j)}^{(c)}|d). Since the aim is to generate posterior samples with equal weights, the following two additional steps are performed (alternatively, a re-sampling step could be used): In Step (4), an initial seed is selected to initiate the Metropolis-Hastings algorithm. This seed is selected through a re-sampling step such that it asymptotically follows the posterior distribution. Finally, in Step (5), the distribution of the samples is corrected by means of MCMC, employing the Metropolis-Hastings algorithm presented in [31]. As the set of samples generated in Steps (1) and (2) is used as candidate samples of the MCMC algorithm, the employed proposal distribution is p_trunc^{(c)}(θ|d).
Therefore, the Metropolis-Hastings accept/reject ratio of the kth candidate sample θ_k^{(c)} is:

$$r_k = \min\left(1,\; \frac{p(\theta_k^{(c)}|d)\; p_{\mathrm{trunc}}^{(c)}(\theta^{(k-1)}|d)}{p(\theta^{(k-1)}|d)\; p_{\mathrm{trunc}}^{(c)}(\theta_k^{(c)}|d)}\right) \qquad (26)$$

$$= \min\left(1,\; \frac{L(\theta_k^{(c)}|d)\; L_{\mathrm{trunc}}^{(c)}(\theta^{(k-1)}|d)}{L(\theta^{(k-1)}|d)\; L_{\mathrm{trunc}}^{(c)}(\theta_k^{(c)}|d)}\right) \qquad (27)$$

where k ∈ {2, ..., K} and θ^{(k−1)} denotes the current state of the Markov chain. The candidate sample is accepted with probability r_k and rejected with probability 1 − r_k. It is a standard result that this Metropolis-Hastings algorithm has stationary distribution p(θ|d) and, thus, the generated samples asymptotically follow the posterior distribution. The ratio r_k can be simplified to:

$$r_k = \min\left(1,\; \frac{\max\left(L(\theta_k^{(c)}|d),\; c^{-1}\right)}{\max\left(L(\theta^{(k-1)}|d),\; c^{-1}\right)}\right) \qquad (28)$$

This can be readily shown by distinguishing four different cases:

Case (1): L(θ_k^{(c)}|d) ≤ c⁻¹ and L(θ^{(k−1)}|d) ≤ c⁻¹. In this case L(θ|d) = L_trunc^{(c)}(θ|d) holds for both samples and, consequently, both Eq. (27) and Eq. (28) become one.

Case (2): L(θ_k^{(c)}|d) > c⁻¹ and L(θ^{(k−1)}|d) ≤ c⁻¹. As in the previous case, Eqs. (27) and (28) become one.

Case (3): L(θ_k^{(c)}|d) ≤ c⁻¹ and L(θ^{(k−1)}|d) > c⁻¹. Eqs. (27) and (28) result in c⁻¹/L(θ^{(k−1)}|d).

Case (4): L(θ_k^{(c)}|d) > c⁻¹ and L(θ^{(k−1)}|d) > c⁻¹. In this case L_trunc^{(c)}(θ|d) = c⁻¹ for both samples, and so both Eq. (27) and Eq. (28) translate to min(1, L(θ_k^{(c)}|d)/L(θ^{(k−1)}|d)).

Consequently, Eqs. (27) and (28) are indeed equivalent.

The burn-in period of the Markov chain in Step (5) is very short for reasonably large K, as the initial seed asymptotically follows the stationary distribution of the chain. Note that the likelihood function needs to be evaluated only in Step (2) of the algorithm. If the evaluation of the model behind the likelihood is computationally demanding, the main computational burden of the algorithm is in Step (2). All other steps have a small impact on the run-time in this case.

Algorithm (2): M-H rejection sampling

As input the algorithm requires: K, the total number of samples to draw from the posterior distribution; c, where 1/c may be selected smaller than L_max.

The algorithm evaluates the evidence c_E and returns K unweighted but dependent posterior samples θ^(k) with k = 1, ..., K.

1. Initialize counters k = 0 and n = 0.
2. while (k < K) do:
   (a) Propose sample [θ, π]:
       i. Draw θ from the prior distribution p(θ).
       ii. Draw π from the uniform distribution that has support [0, 1].
   (b) if (g(θ, π) ≤ 0) then:
       i. Accept the proposed sample as a sample of the truncated posterior distribution, i.e.: set θ_{(k)}^{(c)} = θ.
       ii. Increase the counter k = k + 1.
   (c) Increase the counter n = n + 1.
3. Evaluate the evidence c_E:
   (a) Estimate p_Ω^{(c)} by means of Eq. (7) or Eq. (11).
   (b) Evaluate the quantity ŵ^{(c)} defined in Eq. (20).
   (c) An estimate ĉ_{E,K} of c_E is obtained through Eq. (15), where c_E^{(c)} = p_Ω^{(c)}/c.
4. Pick seed sample θ^(1):
   (a) Resampling step: draw index i randomly from the list {1, ..., K}, where the jth element of the list is associated with probability L(θ_{(j)}^{(c)}|d)/L_trunc^{(c)}(θ_{(j)}^{(c)}|d) divided by (ŵ^{(c)} K).
   (b) Swap θ_{(i)}^{(c)} with θ_{(1)}^{(c)}.
   (c) Set θ^(1) = θ_{(1)}^{(c)}.
   (d) Set k = 2.
5. while (k ≤ K) do:
   (a) Draw a sample u randomly from a uniform distribution with support [0, 1].
   (b) Evaluate the accept/reject ratio r_k defined in Eq. (28); i.e.: r_k = min(1, max(L(θ_{(k)}^{(c)}|d), c⁻¹) / max(L(θ^{(k−1)}|d), c⁻¹)).
   (c) if (u ≤ r_k) then accept the candidate sample: set θ^(k) = θ_{(k)}^{(c)}; else (i.e., if u > r_k) reject the candidate sample: set θ^(k) = θ^{(k−1)}.
   (d) Increase the counter k = k + 1.

The principle of Algorithm (2) is illustrated in Fig. 2. Contrary to rejection sampling, this algorithm does not produce K independent samples. Instead, the K generated posterior samples are dependent, because of the acceptance/rejection step in the Metropolis-Hastings algorithm (Steps (5b) and (5c) in Algorithm (2)). The smaller the corresponding acceptance rate, the larger is the induced dependency.

Figure 2: Illustration of the principle of the M-H rejection sampling algorithm (Algorithm (2)). Note that only the black samples are independent; the blue samples introduce a dependency (at least one black sample has the same θ as a blue sample) that decreases the efficiency of the sampling algorithm. For clarity, the ordinate of a sample in Ω is selected randomly between zero and L(θ|d).

A particular advantage of Algorithm (2) compared to Algorithm (1) is that Algorithm (2) can be applied without knowing L_max. However, the selected c still has a considerable influence on the efficiency of the algorithm. General guidelines on how to select c are given in [4]. A potential strategy to select the value of c in Algorithm (2) is to perform an initial Monte Carlo simulation from the prior distribution with a small number of samples, and to select 1/c equal to the largest observed likelihood value. The initially generated Monte Carlo samples can be re-used in Algorithm (2). Note that the performance of the proposed post-processing step does not depend on the dimension of θ: the actual location of a generated sample is not used, only the associated likelihood is considered.

4. Numerical investigations

In this section, the performance of Algorithm (2) is investigated by means of three example problems.

4.1. Definitions

For the discussion of the example problems, the following quantities are introduced:

- κ acts as a normalized version of c: Let κ ∈ (0, 1] be defined through κ⁻¹ = c·L_max; i.e., κ = 1 corresponds to c⁻¹ = L_max and κ → 0 corresponds to c⁻¹ → 0. The performance of the investigated algorithm is assessed for different values of κ.
- κ_{10³,max} represents the largest observed likelihood in a set of 10³ independent posterior samples, multiplied with c = 1/L_max; it corresponds to the value of κ that results if 1/c is set equal to the largest observed likelihood. Note that κ_{10³,max} is a stochastic quantity.
- c_{E,ref} denotes the actual value of the evidence of the example problem.
- The quantity p_{Ω,ref} is defined as p_{Ω,ref} = c_{E,ref}/L_max.
- ĉ_{E,K} is the evidence estimated by the investigated algorithm based on K posterior samples. Moreover, ĉ_{E,K}^{(c)} denotes the evidence estimated with Algorithm (1) and scaling constant c.

- a_K and s_K denote the estimated mean and standard deviation of the first component θ₁ of the parameter vector θ in a set of K posterior samples. Note that a_K and s_K are random variables for finite K. If the investigated algorithm produces posterior samples, we have E[a_K] = E[θ₁|d] and E[s_K] = σ[θ₁|d]. If the generated posterior samples are independent, then σ[a_K] = K^{−1/2} σ[θ₁|d]. For dependent samples, σ[a_K] can be expressed as

$$\sigma[a_K] = \sqrt{\frac{1+\gamma}{K}}\; \sigma[\theta_1|d] \qquad (29)$$

where γ quantifies the dependency of the generated samples.
- N_eff is the number of effectively independent samples in the generated set of K posterior samples (of the first component θ₁). This quantity specifies how many truly independent posterior samples of θ₁ would give the same variance in the sample mean as Var[a_K] obtained by BUS:

$$N_{\mathrm{eff}} = \left(\frac{\mathrm{E}[s_K]}{\sigma[a_K]}\right)^2 = \frac{K}{1+\gamma} \qquad (30)$$

Note that N_eff can be interpreted as a measure for the dependency of the generated posterior samples; for fixed K, the smaller N_eff the stronger the dependency.
- n_K is the total number of prior samples needed to generate K posterior samples in Algorithm (2).
- θ̂ represents a posterior sample obtained with the investigated algorithm.

4.2. Investigated example problems

Example problem 1a: A one-dimensional problem with a standard Gaussian prior. The uncertain parameter is denoted by θ. The likelihood for θ is a Gaussian distribution that has mean μ_l = 3 and standard deviation σ_l = 0.3. This problem has an analytical solution: the posterior distribution is Gaussian with mean and standard deviation of μ_l/(σ_l² + 1) = 2.75 and (1 + σ_l^{−2})^{−1/2} = 0.287, respectively. The maximum of the likelihood is L_max = 1/(σ_l √(2π)) = 1.33. The evidence of the example problem is

$$c_{E,\mathrm{ref}} = \frac{\varphi\!\left(\mu_l/\sqrt{1+\sigma_l^2}\right)}{\sqrt{1+\sigma_l^2}} = 6.16 \cdot 10^{-3}$$

where φ(·) is the PDF of the standard Gaussian distribution. Consequently, p_{Ω,ref} of the rejection sampling algorithm is 4.63·10⁻³, if c = 1/L_max.

Example problem 1b: The formulation of this problem is equivalent to Example problem 1a, with the only difference being that the likelihood function for θ has mean μ_l = 5 and standard deviation σ_l = 0.2. The posterior mean and standard deviation are 4.81 and 0.196, respectively. The evidence for this problem is c_{E,ref} = 2.35·10⁻⁶. Consequently, p_{Ω,ref} of the rejection sampling algorithm is 1.18·10⁻⁶, if c = 1/L_max and L_max = 1.99.

Example problem 2: A 12-dimensional problem with prior ∏_{i=1}^{12} φ(θ_i), where φ(·) denotes the standard Gaussian PDF and θ_i is the ith component of the 12-dimensional parameter vector θ. The likelihood function of the problem is ∏_{i=1}^{12} σ_l^{−1} φ((θ_i − μ_l)/σ_l), with σ_l = 0.6, where the value μ_l is chosen such that the evidence c_{E,ref} becomes 10⁻⁶; i.e., μ_l = 0.46. The posterior mean and standard deviation of each component of θ are 0.34 and 0.51, respectively. The theoretical maximum that the likelihood function can take is L_max = (0.6·√(2π))^{−12} = 7.48·10⁻³. Thus, p_{Ω,ref} of the rejection sampling algorithm is 1.34·10⁻⁴, if c = 1/L_max.

Example problem 3: A two-story frame structure represented as a two-degree-of-freedom shear building model is investigated. This example problem was originally discussed in [32]. BUS is applied in [4, 13] to solve this problem. The two stiffness coefficients k₁ (first story) and k₂ (second story) of the model are considered uncertain. The uncertainty in k₁ and k₂ is expressed as k₁ = k_n θ₁ and k₂ = k_n θ₂, where θ₁ and θ₂ are uncertain parameters and k_n = 29.7·10⁶ N/m. The prior distributions of θ₁ and θ₂ are modeled as independent log-normal distributions with modes 1.3 and 0.8 and standard deviation 1.0. The lumped story masses m₁ (first story) and m₂ (second story) are considered deterministic and have masses m₁ = 16.5·10³ kg and m₂ = 16.1·10³ kg. The influence of damping is neglected.
Bayesian updating is performed based on the measured first two eigenfrequencies of the system: f̃₁ = 3.13 Hz and f̃₂ = 9.83 Hz. The likelihood of the problem is expressed as

$$L(\theta) = \exp\left(-0.5\, J(\theta)/\sigma_\varepsilon^2\right), \qquad J(\theta) = \sum_{j=1}^{2} \lambda_j \left(\frac{f_j(\theta)^2}{\tilde f_j^2} - 1\right)^2$$

where σ_ε = 1/16, λ₁ = λ₂ = 1, and f_j(θ) is the jth eigenfrequency predicted by the model. The posterior distribution of this problem is bimodal [4, 32]. The reference evidence is c_{E,ref} = p_{Ω,ref} = 1.5·10⁻³ (since L_max = 1). Moreover, E[θ₁|d] = 1.12 and σ[θ₁|d] = 0.66.

The reference solutions of the presented example problems are summarized in Table 1. In addition to the quantities c_{E,ref}, L_max, p_{Ω,ref}, E[θ₁|d] and σ[θ₁|d], some statistics of the quantity κ_{10³,max} are listed in the last four rows. The statistics for κ_{10³,max} can be computed explicitly for Example problems 1a, 1b and 2, and are evaluated by means of a large number of repeated runs of the rejection sampling algorithm with K = 10³ for Example problem 3. It is obvious that Example problem 2 differs from the other problems with respect to the statistics of κ_{10³,max}: For Example problem 2, E[κ_{10³,max}] = 0.46 and Pr[κ_{10³,max} > 0.8] = 10⁻⁴, whereas E[κ_{10³,max}] ≈ 1 and Pr[κ_{10³,max} > 0.8] ≈ 1 for all other example problems. Consequently, it is extremely unlikely that a κ_{10³,max} close to one will be observed in Example problem 2 in a set of 10³ posterior samples.

4.3. Assessment of the performance for κ < 1

We assess the performance of the introduced example problems for Algorithm (2) and different values of κ ∈ (0, 1].

Table 1: Reference solutions of the investigated example problems (quantities c_{E,ref}, L_max, p_{Ω,ref}, E[θ₁|d], σ[θ₁|d], E[κ_{10³,max}], Pr[κ_{10³,max} > 0.8], Pr[κ_{10³,max} < 0.99] and Pr[κ_{10³,max} < 0.999] for Example problems 1a, 1b, 2 and 3).

Figure 3: Statistics of the estimated evidence ĉ_{E,10³} divided by the reference value of the evidence c_{E,ref} for different values of κ, with κ⁻¹ = L_max·c. The thick black line represents the average obtained with Algorithm (2); as the average is 1 with good approximation, the estimate ĉ_{E,10³} can be considered unbiased. The highlighted area shows the 90% confidence interval of the estimated ĉ_{E,10³}/c_{E,ref} computed with Algorithm (2). The dashed red line shows the average obtained with Algorithm (1); the bias in the estimated evidence clearly increases for decreasing κ. The underlying data was generated by solving the updating problem repeatedly, generating K = 10³ posterior samples in each run.

The number of posterior samples generated per updating run is K = 10³. The smallest value of κ investigated in the studies is 10⁻⁴. The results are presented in Figs. 3–5. Fig. 3 shows the statistics of the estimated evidence, Fig. 4 shows the statistics of the generated posterior samples of θ₁, and Fig. 5 shows the statistics of both E[n_{10³}] and σ[a_{10³}]. The data used to plot Figs. 3–5 was generated by solving the updating problem repeatedly, generating K = 10³ posterior samples in each run. The number of times the problem was solved is 10⁵, 10⁴, 9·10³ and 8·10⁴ for Example problems 1a, 1b, 2 and 3, respectively.

The statistics of the estimated evidence ĉ_{E,10³} as a function of κ are presented in Fig. 3. Independent of the value of κ, no bias in the estimate of the evidence can be observed. In addition to the mean, the 90% confidence interval of ĉ_{E,10³}/c_{E,ref} is shown. For κ ∈ [0.5, 1], the influence of κ on the interval can be considered negligible, independent of the example problem. The mean of ĉ_{E,10³}^{(c)}/c_{E,ref} computed with Algorithm (1) and 1/c < L_max is also shown in Fig. 3. The results clearly show that the estimate of the evidence ĉ_{E,10³}^{(c)} obtained with Algorithm (1) and 1/c < L_max underestimates the true evidence c_{E,ref} of the example problem.
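The behavior reported in Fig. 3 can be reproduced with a compact implementation of Algorithm (2). The sketch below is our own code, assuming NumPy and SciPy; Example problem 1a is used because its reference evidence is known analytically. For a given κ (with κ⁻¹ = c·L_max), it runs the M-H rejection sampling algorithm and returns the corrected evidence estimate and the posterior samples.

```python
import numpy as np
from scipy import stats

MU_L, SIG_L = 3.0, 0.3                              # Example problem 1a
L = lambda th: stats.norm.pdf(th, MU_L, SIG_L)      # likelihood L(theta|d)
L_MAX = stats.norm.pdf(0.0, 0.0, SIG_L)             # 1/(0.3*sqrt(2*pi)) ~ 1.33

def mh_rejection_sampling(kappa, K=1000, seed=None):
    """Algorithm (2) with c = (kappa * L_max)**-1, i.e. 1/c < L_max for kappa < 1."""
    rng = np.random.default_rng(seed)
    c = 1.0 / (kappa * L_MAX)
    # Steps (1)-(2): rejection sampling against the (possibly truncated) envelope
    th_c, n = [], 0
    while len(th_c) < K:
        theta = rng.standard_normal()               # draw from the standard Gaussian prior
        if rng.uniform() <= c * L(theta):           # g(theta, pi) <= 0
            th_c.append(theta)
        n += 1
    th_c = np.array(th_c)
    lik = L(th_c)
    # Step (3): corrected evidence, Eqs. (7), (20) and (15)
    p_c = (K - 1) / (n - 1)
    w_hat = np.mean(np.maximum(1.0, c * lik))
    c_E_hat = w_hat * p_c / c
    # Step (4): resample one seed according to the importance weights
    weights = np.maximum(1.0, c * lik)
    i = rng.choice(K, p=weights / weights.sum())
    th_c[[0, i]] = th_c[[i, 0]]
    lik[[0, i]] = lik[[i, 0]]
    # Step (5): Metropolis-Hastings pass over the candidate samples, Eq. (28)
    post = np.empty(K)
    post[0] = th_c[0]
    cur_lik = lik[0]
    for k in range(1, K):
        r = min(1.0, max(lik[k], 1.0 / c) / max(cur_lik, 1.0 / c))
        if rng.uniform() <= r:
            post[k] = th_c[k]
            cur_lik = lik[k]
        else:
            post[k] = post[k - 1]
    return c_E_hat, post

c_E_ref = stats.norm.pdf(MU_L, 0.0, np.sqrt(1.0 + SIG_L**2))   # 6.16e-3
est, samples = mh_rejection_sampling(kappa=0.1, seed=0)
print(est / c_E_ref, samples.mean(), samples.std())  # ratio ~ 1; mean ~ 2.75; std ~ 0.287
```

For κ = 1 the Metropolis-Hastings step accepts every candidate and the scheme reduces to Algorithm (1); for κ < 1 the corrected estimate remains unbiased on average, whereas the uncorrected estimate p_c/c would be biased low by the factor w^{(c)}.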
Fig. 4 shows that the estimates of both the mean and the 90% confidence interval of the posterior samples θ̂ obtained with the M-H rejection sampling algorithm (Algorithm (2)) are unbiased, independent of the choice of κ. The same cannot be said about Algorithm (1) and 1/c < L_max. In this case, the bias in the mean and the deviation from the 90% confidence interval of the samples generated with Algorithm (1) increase with decreasing κ. This effect is more dominant in Example problems 1a and 1b than in Example problems 2 and 3.

The samples produced by the standard rejection sampling are independent, whereas the samples generated with Algorithm (2) and κ < 1 are dependent. The dependency of the samples increases with decreasing κ. This effect is illustrated in the right part of Fig. 5. The effective number of independent posterior samples N_eff decreases with decreasing κ. Thus, the efficiency of Algorithm (2) with respect to the generated posterior samples decreases with decreasing κ. However, the computational cost to generate 10³ (dependent) posterior samples also decreases with decreasing κ; see the left part of Fig. 5.

For κ ∈ [0.5, 1], the influence of κ on the 90% confidence interval of ĉ_{E,10³}/c_{E,ref} (shown in Fig. 3) was found to be negligible, independent of the example problem. However, this does not imply that it is irrelevant whether the analysis is performed with κ = 1 or, for example, with κ = 0.5. The reason is that although the discussed confidence interval remains stable, the computational cost decreases with decreasing κ. The computational cost is expressed in terms of the average number of prior samples E[n_{10³}] required to perform the analysis (see the left part of Fig. 5). For κ ∈ [0.5, 1] the value of E[n_{10³}] is with good approximation proportional to κ. Thus, working with a reduced κ is computationally efficient. The plots shown in Fig. 5 demonstrate that the relative size of E[n_{10³}] decreases faster than the relative size of N_eff.

Figure 4: Statistics of the estimated posterior samples θ̂ divided by the reference mean of θ₁ for different values of κ, with κ⁻¹ = L_max·c. The thick black line represents the average obtained with Algorithm (2); as the average is 1 with good approximation, the sample mean can be considered unbiased. The highlighted area shows the 90% confidence interval obtained with Algorithm (2). The dashed red lines show the average and confidence intervals obtained with Algorithm (1). The underlying data was generated by solving the updating problem repeatedly, generating K = 10³ posterior samples in each run.

Figure 5: The average number of model calls (left part of the figure) and the effective number of independent posterior samples (right part of the figure), shown for κ ∈ (0, 1] and Algorithm (2). The quantity κ is plotted on the y-axis of the sub-plots and ranges from zero to one. On the left-hand side of the figure, the average number E[n_{10³}] of prior samples required to solve the problem is divided by the average number of prior samples needed in standard rejection sampling, i.e., by 10³/p_Ω; for κ ∈ (0, 1], this quantity is within [0, 1] and measures how many prior samples were needed to solve the problem compared to standard rejection sampling. On the right-hand side of the figure, the effective number of independent posterior samples N_eff divided by the total number 10³ of posterior samples is shown.

Thus, Algorithm (2) is more efficient for smaller κ than for larger κ.

In general, the results presented in this section demonstrate that neither Algorithm (1) nor Algorithm (2) is a particularly favorable strategy. Instead, if a total number n of model calls that one is willing to invest can be specified, it is better to generate n samples from the prior distribution and regard the prior samples as weighted samples from the posterior distribution (i.e., the limit κ → 0). In this case, the sample weights are proportional to the likelihood function, and the evidence can be estimated as the sample mean of the likelihood function obtained with the samples from the prior distribution (see e.g., [33]). However, the advantage of Algorithm (2) (and the BUS approach in general) is that the rejection sampling step (Step (2)) can be replaced by any structural reliability method. A reliability method that is particularly well suited for BUS is Subset Simulation: it can handle problems with many uncertain parameters and can efficiently estimate very small probabilities that may arise within BUS (see e.g., [4]). Moreover, an estimate for the evidence c_E is obtained as a by-product of BUS.

5. Concluding remarks

Bayesian Updating with Structural reliability methods (BUS) reinterprets the Bayesian updating problem as a structural reliability problem. The evidence of the inference problem is obtained as a by-product of BUS. In this paper, we propose a post-processing step for BUS that does not require the scaling constant c to be chosen such that 1/c ≥ L_max. By means of the proposed post-processing step, an unbiased estimate for the evidence as well as samples from the posterior distribution can be obtained even if the maximum of the likelihood function L_max is not known. The performance of the proposed post-processing step was demonstrated by means of different example problems and rejection sampling. In practice, the proposed post-processing step for BUS can be combined with any structural reliability method.
Acknowledgments

The authors acknowledge the support of the Technische Universität München Institute for Advanced Study, funded by the German Excellence Initiative. The first author would also like to thank the California Institute of Technology (Caltech) for his research stay at Caltech, which provided the basis for this work.

References

[1] J. L. Beck, Bayesian system identification based on probability logic, Structural Control and Health Monitoring, vol. 17, no. 7, pp. 825–847, 2010.
[2] D. J. MacKay, Bayesian interpolation, Neural Computation, vol. 4, no. 3, pp. 415–447, 1992.
[3] L. Wasserman, Bayesian model selection and model averaging, Journal of Mathematical Psychology, vol. 44, no. 1, pp. 92–107, 2000.
[4] D. Straub and I. Papaioannou, Bayesian updating with structural reliability methods, Journal of Engineering Mechanics, vol. 141, no. 3, 2015.

[5] O. Ditlevsen and H. O. Madsen, Structural Reliability Methods. Wiley, New York, 1996.
[6] R. E. Melchers, Structural Reliability Analysis and Prediction. John Wiley & Sons, 1999.
[7] D. Straub, Engineering risk assessment, in Risk – A Multidisciplinary Introduction, Springer, 2014.
[8] D. Straub, I. Papaioannou, and W. Betz, Bayesian analysis of rare events, Journal of Computational Physics, vol. 314, pp. 538–556, 2016.
[9] M. Lemaire, Structural Reliability. John Wiley & Sons, 2013.
[10] S.-K. Au and J. L. Beck, Estimation of small failure probabilities in high dimensions by subset simulation, Probabilistic Engineering Mechanics, vol. 16, no. 4, pp. 263–277, 2001.
[11] K. M. Zuev, J. L. Beck, S.-K. Au, and L. S. Katafygiotis, Bayesian post-processor and other enhancements of Subset Simulation for estimating failure probabilities in high dimensions, Computers & Structures, vol. 92–93, pp. 283–296, 2012.
[12] W. Betz, I. Papaioannou, J. L. Beck, and D. Straub, Bayesian inference with subset simulation: strategies and improvements, Computer Methods in Applied Mechanics and Engineering, vol. 331, pp. 72–93, 2018.
[13] F. DiazDelaO, A. Garbuno-Inigo, S.-K. Au, and I. Yoshida, Bayesian updating and model class selection with Subset Simulation, Computer Methods in Applied Mechanics and Engineering, vol. 317, pp. 1102–1121, 2017.
[14] W. Betz, I. Papaioannou, and D. Straub, Adaptive variant of the BUS approach to Bayesian updating, in Proceedings of the 9th European Conference on Structural Dynamics, 2014.
[15] I. Papaioannou, W. Betz, and D. Straub, Bayesian model updating of a tunnel in soft soil with settlement measurements, in Geotechnical Safety and Risk IV, 2013.
[16] W. Betz, C. M. Mok, I. Papaioannou, and D. Straub, Bayesian model calibration using structural reliability methods: application to the hydrological abc model, in Vulnerability, Uncertainty, and Risk: Quantification, Mitigation, and Management (M. Beer, S.-K. Au, and J. W. Hall, eds.), ASCE, 2014.
[17] A. M. Hasofer and N. C. Lind, An exact and invariant first-order reliability format, Journal of Engineering Mechanics, vol. 100, 1974.
[18] R. Rackwitz and B. Fiessler, Structural reliability under combined random load sequences, Computers & Structures, vol. 9, no. 5, pp. 489–494, 1978.
[19] M. Hohenbichler and R. Rackwitz, Improvement of second-order reliability estimates by importance sampling, Journal of Engineering Mechanics, vol. 114, no. 12, pp. 2195–2199, 1988.
[20] P. Koutsourelakis, H. Pradlwarter, and G. Schuëller, Reliability of structures in high dimensions, Part I: algorithms and applications, Probabilistic Engineering Mechanics, vol. 19, no. 4, pp. 409–417, 2004.
[21] R. Rackwitz, Reliability analysis – a review and some perspectives, Structural Safety, vol. 23, no. 4, pp. 365–395, 2001.
[22] D. Straub, Reliability updating with equality information, Probabilistic Engineering Mechanics, vol. 26, no. 2, pp. 254–258, 2011.
[23] A. F. Smith and A. E. Gelfand, Bayesian statistics without tears: a sampling-resampling perspective, The American Statistician, vol. 46, no. 2, pp. 84–88, 1992.
[24] J. Haldane, On a method of estimating frequencies, Biometrika, vol. 33, no. 3, pp. 222–225, 1945.
[25] D. Finney, On a method of estimating frequencies, Biometrika, vol. 36, no. 1/2, pp. 233–234, 1949.
[26] E. T. Jaynes, Papers on Probability, Statistics and Statistical Physics. Dordrecht, Holland: D. Reidel Publishing, 1983.
[27] E. T. Jaynes, Probability Theory: The Logic of Science. Cambridge University Press, 2003.
[28] B. D. Flury, Acceptance-rejection sampling made easy, SIAM Review, vol. 32, no. 3, pp. 474–476, 1990.
[29] N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller, Equation of state calculations by fast computing machines, Journal of Chemical Physics, vol. 21, pp. 1087–1092, 1953.
[30] W. K. Hastings, Monte Carlo sampling methods using Markov chains and their applications, Biometrika, vol. 57, no. 1, pp. 97–109, 1970.
[31] L. Tierney, Markov chains for exploring posterior distributions, The Annals of Statistics, vol. 22, pp. 1701–1728, 1994.
[32] J. L. Beck and S.-K. Au, Bayesian updating of structural models and reliability using Markov chain Monte Carlo simulation, Journal of Engineering Mechanics, vol. 128, no. 4, pp. 380–391, 2002.
[33] N. Chopin, A sequential particle filter method for static models, Biometrika, vol. 89, no. 3, pp. 539–551, 2002.


More information

Minimizing a convex separable exponential function subject to linear equality constraint and bounded variables

Minimizing a convex separable exponential function subject to linear equality constraint and bounded variables Minimizing a convex separale exponential function suect to linear equality constraint and ounded variales Stefan M. Stefanov Department of Mathematics Neofit Rilski South-Western University 2700 Blagoevgrad

More information

Computational statistics

Computational statistics Computational statistics Markov Chain Monte Carlo methods Thierry Denœux March 2017 Thierry Denœux Computational statistics March 2017 1 / 71 Contents of this chapter When a target density f can be evaluated

More information

Scale Parameter Estimation of the Laplace Model Using Different Asymmetric Loss Functions

Scale Parameter Estimation of the Laplace Model Using Different Asymmetric Loss Functions Scale Parameter Estimation of the Laplace Model Using Different Asymmetric Loss Functions Said Ali (Corresponding author) Department of Statistics, Quaid-i-Azam University, Islamaad 4530, Pakistan & Department

More information

Two-Stage Improved Group Plans for Burr Type XII Distributions

Two-Stage Improved Group Plans for Burr Type XII Distributions American Journal of Mathematics and Statistics 212, 2(3): 33-39 DOI: 1.5923/j.ajms.21223.4 Two-Stage Improved Group Plans for Burr Type XII Distriutions Muhammad Aslam 1,*, Y. L. Lio 2, Muhammad Azam 1,

More information

Markov Chain Monte Carlo Methods for Stochastic Optimization

Markov Chain Monte Carlo Methods for Stochastic Optimization Markov Chain Monte Carlo Methods for Stochastic Optimization John R. Birge The University of Chicago Booth School of Business Joint work with Nicholas Polson, Chicago Booth. JRBirge U of Toronto, MIE,

More information

Bayesian Inference. Chapter 1. Introduction and basic concepts

Bayesian Inference. Chapter 1. Introduction and basic concepts Bayesian Inference Chapter 1. Introduction and basic concepts M. Concepción Ausín Department of Statistics Universidad Carlos III de Madrid Master in Business Administration and Quantitative Methods Master

More information

Travel Grouping of Evaporating Polydisperse Droplets in Oscillating Flow- Theoretical Analysis

Travel Grouping of Evaporating Polydisperse Droplets in Oscillating Flow- Theoretical Analysis Travel Grouping of Evaporating Polydisperse Droplets in Oscillating Flow- Theoretical Analysis DAVID KATOSHEVSKI Department of Biotechnology and Environmental Engineering Ben-Gurion niversity of the Negev

More information

Particle Filtering Approaches for Dynamic Stochastic Optimization

Particle Filtering Approaches for Dynamic Stochastic Optimization Particle Filtering Approaches for Dynamic Stochastic Optimization John R. Birge The University of Chicago Booth School of Business Joint work with Nicholas Polson, Chicago Booth. JRBirge I-Sim Workshop,

More information

Markov Chain Monte Carlo methods

Markov Chain Monte Carlo methods Markov Chain Monte Carlo methods By Oleg Makhnin 1 Introduction a b c M = d e f g h i 0 f(x)dx 1.1 Motivation 1.1.1 Just here Supresses numbering 1.1.2 After this 1.2 Literature 2 Method 2.1 New math As

More information

Harmful Upward Line Extensions: Can the Launch of Premium Products Result in Competitive Disadvantages? Web Appendix

Harmful Upward Line Extensions: Can the Launch of Premium Products Result in Competitive Disadvantages? Web Appendix Harmful Upward Line Extensions: Can te Launc of Premium Products Result in Competitive Disadvantages? Faio Caldieraro Ling-Jing Kao and Marcus Cuna Jr We Appendix Markov Cain Monte Carlo Estimation e Markov

More information

Investigation of a Ball Screw Feed Drive System Based on Dynamic Modeling for Motion Control

Investigation of a Ball Screw Feed Drive System Based on Dynamic Modeling for Motion Control Investigation of a Ball Screw Feed Drive System Based on Dynamic Modeling for Motion Control Yi-Cheng Huang *, Xiang-Yuan Chen Department of Mechatronics Engineering, National Changhua University of Education,

More information

Hierarchical sparse Bayesian learning for structural health monitoring. with incomplete modal data

Hierarchical sparse Bayesian learning for structural health monitoring. with incomplete modal data Hierarchical sparse Bayesian learning for structural health monitoring with incomplete modal data Yong Huang and James L. Beck* Division of Engineering and Applied Science, California Institute of Technology,

More information

Efficient Aircraft Design Using Bayesian History Matching. Institute for Risk and Uncertainty

Efficient Aircraft Design Using Bayesian History Matching. Institute for Risk and Uncertainty Efficient Aircraft Design Using Bayesian History Matching F.A. DiazDelaO, A. Garbuno, Z. Gong Institute for Risk and Uncertainty DiazDelaO (U. of Liverpool) DiPaRT 2017 November 22, 2017 1 / 32 Outline

More information

Markov chain Monte Carlo methods in atmospheric remote sensing

Markov chain Monte Carlo methods in atmospheric remote sensing 1 / 45 Markov chain Monte Carlo methods in atmospheric remote sensing Johanna Tamminen johanna.tamminen@fmi.fi ESA Summer School on Earth System Monitoring and Modeling July 3 Aug 11, 212, Frascati July,

More information

Example: Ground Motion Attenuation

Example: Ground Motion Attenuation Example: Ground Motion Attenuation Problem: Predict the probability distribution for Peak Ground Acceleration (PGA), the level of ground shaking caused by an earthquake Earthquake records are used to update

More information

1. Define the following terms (1 point each): alternative hypothesis

1. Define the following terms (1 point each): alternative hypothesis 1 1. Define the following terms (1 point each): alternative hypothesis One of three hypotheses indicating that the parameter is not zero; one states the parameter is not equal to zero, one states the parameter

More information

Bagging During Markov Chain Monte Carlo for Smoother Predictions

Bagging During Markov Chain Monte Carlo for Smoother Predictions Bagging During Markov Chain Monte Carlo for Smoother Predictions Herbert K. H. Lee University of California, Santa Cruz Abstract: Making good predictions from noisy data is a challenging problem. Methods

More information

CSC 2541: Bayesian Methods for Machine Learning

CSC 2541: Bayesian Methods for Machine Learning CSC 2541: Bayesian Methods for Machine Learning Radford M. Neal, University of Toronto, 2011 Lecture 3 More Markov Chain Monte Carlo Methods The Metropolis algorithm isn t the only way to do MCMC. We ll

More information

Approximate Bayesian Computation: a simulation based approach to inference

Approximate Bayesian Computation: a simulation based approach to inference Approximate Bayesian Computation: a simulation based approach to inference Richard Wilkinson Simon Tavaré 2 Department of Probability and Statistics University of Sheffield 2 Department of Applied Mathematics

More information

16 : Markov Chain Monte Carlo (MCMC)

16 : Markov Chain Monte Carlo (MCMC) 10-708: Probabilistic Graphical Models 10-708, Spring 2014 16 : Markov Chain Monte Carlo MCMC Lecturer: Matthew Gormley Scribes: Yining Wang, Renato Negrinho 1 Sampling from low-dimensional distributions

More information

Contents. Learning Outcomes 2012/2/26. Lecture 6: Area Pattern and Spatial Autocorrelation. Dr. Bo Wu

Contents. Learning Outcomes 2012/2/26. Lecture 6: Area Pattern and Spatial Autocorrelation. Dr. Bo Wu 01//6 LSGI 1: : GIS Spatial Applications Analysis Lecture 6: Lecture Area Pattern 1: Introduction and Spatial to GIS Autocorrelation Applications Contents Lecture 6: Area Pattern and Spatial Autocorrelation

More information

Supplementary Note on Bayesian analysis

Supplementary Note on Bayesian analysis Supplementary Note on Bayesian analysis Structured variability of muscle activations supports the minimal intervention principle of motor control Francisco J. Valero-Cuevas 1,2,3, Madhusudhan Venkadesan

More information

Markov Chain Monte Carlo

Markov Chain Monte Carlo 1 Motivation 1.1 Bayesian Learning Markov Chain Monte Carlo Yale Chang In Bayesian learning, given data X, we make assumptions on the generative process of X by introducing hidden variables Z: p(z): prior

More information

arxiv: v1 [stat.ml] 10 Apr 2017

arxiv: v1 [stat.ml] 10 Apr 2017 Integral Transforms from Finite Data: An Application of Gaussian Process Regression to Fourier Analysis arxiv:1704.02828v1 [stat.ml] 10 Apr 2017 Luca Amrogioni * and Eric Maris Radoud University, Donders

More information

MODIFIED METROPOLIS-HASTINGS ALGORITHM WITH DELAYED REJECTION FOR HIGH-DIMENSIONAL RELIABILITY ANALYSIS

MODIFIED METROPOLIS-HASTINGS ALGORITHM WITH DELAYED REJECTION FOR HIGH-DIMENSIONAL RELIABILITY ANALYSIS SEECCM 2009 2nd South-East European Conference on Computational Mechanics An IACM-ECCOMAS Special Interest Conference M. Papadrakakis, M. Kojic, V. Papadopoulos (eds.) Rhodes, Greece, 22-24 June 2009 MODIFIED

More information

Precision Engineering

Precision Engineering Precision Engineering 38 (2014) 18 27 Contents lists available at ScienceDirect Precision Engineering j o ur nal homep age : www.elsevier.com/locate/precision Tool life prediction using Bayesian updating.

More information

The Bayesian Approach to Multi-equation Econometric Model Estimation

The Bayesian Approach to Multi-equation Econometric Model Estimation Journal of Statistical and Econometric Methods, vol.3, no.1, 2014, 85-96 ISSN: 2241-0384 (print), 2241-0376 (online) Scienpress Ltd, 2014 The Bayesian Approach to Multi-equation Econometric Model Estimation

More information

ABSTRACT INTRODUCTION

ABSTRACT INTRODUCTION ABSTRACT Presented in this paper is an approach to fault diagnosis based on a unifying review of linear Gaussian models. The unifying review draws together different algorithms such as PCA, factor analysis,

More information

Approximate Inference

Approximate Inference Approximate Inference Simulation has a name: sampling Sampling is a hot topic in machine learning, and it s really simple Basic idea: Draw N samples from a sampling distribution S Compute an approximate

More information

Introduction to Machine Learning CMU-10701

Introduction to Machine Learning CMU-10701 Introduction to Machine Learning CMU-10701 Markov Chain Monte Carlo Methods Barnabás Póczos & Aarti Singh Contents Markov Chain Monte Carlo Methods Goal & Motivation Sampling Rejection Importance Markov

More information

Comment about AR spectral estimation Usually an estimate is produced by computing the AR theoretical spectrum at (ˆφ, ˆσ 2 ). With our Monte Carlo

Comment about AR spectral estimation Usually an estimate is produced by computing the AR theoretical spectrum at (ˆφ, ˆσ 2 ). With our Monte Carlo Comment aout AR spectral estimation Usually an estimate is produced y computing the AR theoretical spectrum at (ˆφ, ˆσ 2 ). With our Monte Carlo simulation approach, for every draw (φ,σ 2 ), we can compute

More information

arxiv: v1 [astro-ph.im] 1 Jan 2014

arxiv: v1 [astro-ph.im] 1 Jan 2014 Adv. Studies Theor. Phys, Vol. x, 200x, no. xx, xxx - xxx arxiv:1401.0287v1 [astro-ph.im] 1 Jan 2014 A right and left truncated gamma distriution with application to the stars L. Zaninetti Dipartimento

More information

Brief introduction to Markov Chain Monte Carlo

Brief introduction to Markov Chain Monte Carlo Brief introduction to Department of Probability and Mathematical Statistics seminar Stochastic modeling in economics and finance November 7, 2011 Brief introduction to Content 1 and motivation Classical

More information

Answers and expectations

Answers and expectations Answers and expectations For a function f(x) and distribution P(x), the expectation of f with respect to P is The expectation is the average of f, when x is drawn from the probability distribution P E

More information

Lecture 5. G. Cowan Lectures on Statistical Data Analysis Lecture 5 page 1

Lecture 5. G. Cowan Lectures on Statistical Data Analysis Lecture 5 page 1 Lecture 5 1 Probability (90 min.) Definition, Bayes theorem, probability densities and their properties, catalogue of pdfs, Monte Carlo 2 Statistical tests (90 min.) general concepts, test statistics,

More information

ASYMPTOTICALLY INDEPENDENT MARKOV SAMPLING: A NEW MARKOV CHAIN MONTE CARLO SCHEME FOR BAYESIAN INFERENCE

ASYMPTOTICALLY INDEPENDENT MARKOV SAMPLING: A NEW MARKOV CHAIN MONTE CARLO SCHEME FOR BAYESIAN INFERENCE International Journal for Uncertainty Quantification, 3 (5): 445 474 (213) ASYMPTOTICALLY IDEPEDET MARKOV SAMPLIG: A EW MARKOV CHAI MOTE CARLO SCHEME FOR BAYESIA IFERECE James L. Beck & Konstantin M. Zuev

More information

On Markov chain Monte Carlo methods for tall data

On Markov chain Monte Carlo methods for tall data On Markov chain Monte Carlo methods for tall data Remi Bardenet, Arnaud Doucet, Chris Holmes Paper review by: David Carlson October 29, 2016 Introduction Many data sets in machine learning and computational

More information

17 : Markov Chain Monte Carlo

17 : Markov Chain Monte Carlo 10-708: Probabilistic Graphical Models, Spring 2015 17 : Markov Chain Monte Carlo Lecturer: Eric P. Xing Scribes: Heran Lin, Bin Deng, Yun Huang 1 Review of Monte Carlo Methods 1.1 Overview Monte Carlo

More information

2014 International Conference on Computer Science and Electronic Technology (ICCSET 2014)

2014 International Conference on Computer Science and Electronic Technology (ICCSET 2014) 04 International Conference on Computer Science and Electronic Technology (ICCSET 04) Lateral Load-carrying Capacity Research of Steel Plate Bearing in Space Frame Structure Menghong Wang,a, Xueting Yang,,

More information

New Insights into History Matching via Sequential Monte Carlo

New Insights into History Matching via Sequential Monte Carlo New Insights into History Matching via Sequential Monte Carlo Associate Professor Chris Drovandi School of Mathematical Sciences ARC Centre of Excellence for Mathematical and Statistical Frontiers (ACEMS)

More information

Probabilistic Modelling of Duration of Load Effects in Timber Structures

Probabilistic Modelling of Duration of Load Effects in Timber Structures Proailistic Modelling of Duration of Load Effects in Timer Structures Jochen Köhler Institute of Structural Engineering, Swiss Federal Institute of Technology, Zurich, Switzerland. Summary In present code

More information

Computer Vision Group Prof. Daniel Cremers. 10a. Markov Chain Monte Carlo

Computer Vision Group Prof. Daniel Cremers. 10a. Markov Chain Monte Carlo Group Prof. Daniel Cremers 10a. Markov Chain Monte Carlo Markov Chain Monte Carlo In high-dimensional spaces, rejection sampling and importance sampling are very inefficient An alternative is Markov Chain

More information

Risk and Reliability Analysis: Theory and Applications: In Honor of Prof. Armen Der Kiureghian. Edited by P.

Risk and Reliability Analysis: Theory and Applications: In Honor of Prof. Armen Der Kiureghian. Edited by P. Appeared in: Risk and Reliability Analysis: Theory and Applications: In Honor of Prof. Armen Der Kiureghian. Edited by P. Gardoni, Springer, 2017 Reliability updating in the presence of spatial variability

More information

INTRODUCTION. 2. Characteristic curve of rain intensities. 1. Material and methods

INTRODUCTION. 2. Characteristic curve of rain intensities. 1. Material and methods Determination of dates of eginning and end of the rainy season in the northern part of Madagascar from 1979 to 1989. IZANDJI OWOWA Landry Régis Martial*ˡ, RABEHARISOA Jean Marc*, RAKOTOVAO Niry Arinavalona*,

More information

Kernel adaptive Sequential Monte Carlo

Kernel adaptive Sequential Monte Carlo Kernel adaptive Sequential Monte Carlo Ingmar Schuster (Paris Dauphine) Heiko Strathmann (University College London) Brooks Paige (Oxford) Dino Sejdinovic (Oxford) December 7, 2015 1 / 36 Section 1 Outline

More information

DEPARTMENT OF ECONOMICS

DEPARTMENT OF ECONOMICS ISSN 089-64 ISBN 978 0 7340 405 THE UNIVERSITY OF MELBOURNE DEPARTMENT OF ECONOMICS RESEARCH PAPER NUMBER 06 January 009 Notes on the Construction of Geometric Representations of Confidence Intervals of

More information

Topological structures and phases. in U(1) gauge theory. Abstract. We show that topological properties of minimal Dirac sheets as well as of

Topological structures and phases. in U(1) gauge theory. Abstract. We show that topological properties of minimal Dirac sheets as well as of BUHEP-94-35 Decemer 1994 Topological structures and phases in U(1) gauge theory Werner Kerler a, Claudio Rei and Andreas Weer a a Fachereich Physik, Universitat Marurg, D-35032 Marurg, Germany Department

More information

arxiv: v1 [cs.fl] 24 Nov 2017

arxiv: v1 [cs.fl] 24 Nov 2017 (Biased) Majority Rule Cellular Automata Bernd Gärtner and Ahad N. Zehmakan Department of Computer Science, ETH Zurich arxiv:1711.1090v1 [cs.fl] 4 Nov 017 Astract Consider a graph G = (V, E) and a random

More information

Variational Methods in Bayesian Deconvolution

Variational Methods in Bayesian Deconvolution PHYSTAT, SLAC, Stanford, California, September 8-, Variational Methods in Bayesian Deconvolution K. Zarb Adami Cavendish Laboratory, University of Cambridge, UK This paper gives an introduction to the

More information

Bayesian inference. Fredrik Ronquist and Peter Beerli. October 3, 2007

Bayesian inference. Fredrik Ronquist and Peter Beerli. October 3, 2007 Bayesian inference Fredrik Ronquist and Peter Beerli October 3, 2007 1 Introduction The last few decades has seen a growing interest in Bayesian inference, an alternative approach to statistical inference.

More information

Buckling Behavior of Long Symmetrically Laminated Plates Subjected to Shear and Linearly Varying Axial Edge Loads

Buckling Behavior of Long Symmetrically Laminated Plates Subjected to Shear and Linearly Varying Axial Edge Loads NASA Technical Paper 3659 Buckling Behavior of Long Symmetrically Laminated Plates Sujected to Shear and Linearly Varying Axial Edge Loads Michael P. Nemeth Langley Research Center Hampton, Virginia National

More information

Time-Varying Spectrum Estimation of Uniformly Modulated Processes by Means of Surrogate Data and Empirical Mode Decomposition

Time-Varying Spectrum Estimation of Uniformly Modulated Processes by Means of Surrogate Data and Empirical Mode Decomposition Time-Varying Spectrum Estimation of Uniformly Modulated Processes y Means of Surrogate Data and Empirical Mode Decomposition Azadeh Moghtaderi, Patrick Flandrin, Pierre Borgnat To cite this version: Azadeh

More information

CS 343: Artificial Intelligence

CS 343: Artificial Intelligence CS 343: Artificial Intelligence Bayes Nets: Sampling Prof. Scott Niekum The University of Texas at Austin [These slides based on those of Dan Klein and Pieter Abbeel for CS188 Intro to AI at UC Berkeley.

More information

Bayes Nets: Sampling

Bayes Nets: Sampling Bayes Nets: Sampling [These slides were created by Dan Klein and Pieter Abbeel for CS188 Intro to AI at UC Berkeley. All CS188 materials are available at http://ai.berkeley.edu.] Approximate Inference:

More information

Design Principles of Seismic Isolation

Design Principles of Seismic Isolation Design Principles of Seismic Isolation 3 George C. Lee and Zach Liang Multidisciplinary Center for Earthquake Engineering Research, University at Buffalo, State University of New York USA 1. Introduction

More information

Materials Science and Engineering A

Materials Science and Engineering A Materials Science and Engineering A 528 (211) 48 485 Contents lists availale at ScienceDirect Materials Science and Engineering A journal homepage: www.elsevier.com/locate/msea Study o strength and its

More information