Guideline for Offshore Structural Reliability Analysis - General: 3. RELIABILITY ANALYSIS


1 FEBRUARY 20

Contents

3. RELIABILITY ANALYSIS
3.1 General
    Variables
    Events
    Event Probability
    The Reliability Index
    The Design Point
    Transformation of Variables
3.2 Analytical Methods
    The First Order Reliability Method, FORM
    The Crude First Order Reliability Method, CRUDE-FORM
    The Second Order Reliability Method, SORM
3.3 Simulation Methods
    Monte-Carlo Simulation
    Directional Simulation
    Axis-Orthogonal Simulation
3.4 Choice of Method
    FORM
    CRUDE-FORM
    SORM
    Monte-Carlo Simulation
    Directional Simulation
    Axis-Orthogonal Simulation
3.5 Evaluation of Analysis Results
    General
    Interpretation of Analysis
        Analysis Models
        Analysis Methods
        Distribution
    Interpretation of Calculated Reliability Measures
    Fulfillment of Assumptions for Analysis
    Conditional Probability
    Parametric Sensitivity Factor
    The Uncertainty Importance and Omission Sensitivity Factors
3.6 Time Dependent Reliability Analysis
    Introduction
    First Passage Failure Probability
    Random Process Simulation
    Outcrossing Rate
    Application
3.7 Response Surface Methods
    Introduction
    Classical Response Surface
    Adaptive Response Surface
REFERENCES

3. Reliability Analysis

3.1 General

The basic problem in Structural Reliability Analysis (SRA) may be formulated as the problem of calculating the small probability that

g(X) < 0   (3.1)

where X is a vector of basic variables (cf. Section 3.1.2) and g(X) is referred to as the limit state function. In most cases in SRA, g(X) may be split into a load term l and a strength term s such that

g(X) = s(X) − l(X)   (3.2)

and failure is defined by the event of the load exceeding the strength, implying g(X) < 0, which is the convention for the definition of failure. The failure event may be a combination of several events.

Variables

A stochastic model is defined through variables. The variables describe functional relationships in the physical model and the randomness of parameters in the model. A parameter of a variable may be a function of coordinates of other variables, so that a network structure of dependencies between variables can be defined. Additional statistical dependence between the variables can be modelled through correlations. A variable X may be a vector of dimension n with coordinates X_i, i = 1, ..., n. A variable in a probabilistic model can be assigned:

- a numeric constant
- a probability distribution
- a function

Numeric Constant

The variable is a constant value, for example 0.5. A variable with this type attribute may be assigned to parameters of every other variable. It permits sensitivity factor calculations and parameter studies for parameters which enter the physical model in more than one place.

Probability Distribution

The variable is assigned a probability distribution (e.g., the exponential distribution, the normal distribution, etc.). The distribution describes the randomness of a parameter in the physical model. For further details see Chapter 4. A variable which has this type attribute may be assigned to parameters of any other variable.

Function

The variable can be a user-defined function of other variables (numeric constants, distributions, etc.). More advanced calculations may need variables to be defined as event probabilities (in nested reliability analysis) or generated distributions. See Tvedt (1993) for more details.

Events

An event, E(X), is a subset of the sample space for the stochastic process involved, i.e., a subset of all the possible outcomes of the stochastic process. An event may be defined through a functional relationship

E(X) = {x; g(x) ≤ 0}   (3.3)

The event identifies the outcomes of interest, while the random variables X define the nature of the stochastic process. The following types of events are described:

- single event
- union of events (series system)
- intersection of events (parallel system)
- conditional event

Single Event

The purpose of the single event is to define a functional relationship which identifies a subdomain of the state space. The single event E_SE(X) is defined by

E_SE(X) = {x; g(x) ≤ θ}   (3.4)

where g(x) is a function in the functions library and θ is the threshold value. The event is an inequality event which identifies a volume in the n-dimensional x-space. Alternatively the event is defined as

E_SE(X) = {x; g(x) = θ}   (3.5)

The event is an equality event which identifies an (n−1)-dimensional surface in the n-dimensional x-space. To be meaningful, the definition of an equality event requires the definition of a limit process. If a measurement variable is assigned to the event, the limit process is on the measured value. If not, the limit process is on θ, i.e., on the threshold value. The event function is

g_SE(x, θ) = g(x) − θ   (3.6)

In structural reliability the function g_SE(x, θ) is denoted a limit state function. The domain of Eq. (3.4), excluding the boundary, is denoted the failure domain. The boundary itself is denoted the failure boundary. The domain of E^C_SE(X), the complement of E_SE(X), is denoted the safe domain. A simple single event function is the load-resistance function g(r, l) = r − l, shown in Figure 3.1.

Figure 3.1 Load-Resistance Limit State Function (axes r and l; the failure surface r = l separates the failure domain from the safe domain)

Union of Events

The union of m events E_i(X) is the collection of outcomes which are in at least one subevent E_i(X):

E_U(X) = ∪_{i=1}^{m} E_i(X)   (3.7)

The subevent E_i(X) may be a single event, a union of events or an intersection of events. In structural reliability the union of single events is used to model series systems.

Intersection of Events

The intersection of m events E_i(X) is the collection of outcomes in the sample space which are common to all the subevents:

E_I(X) = ∩_{i=1}^{m} E_i(X)   (3.8)

A special case is the intersection of single events

E_ISE(X) = ∩_{i=1}^{m} E_SE,i(X)   (3.9)

The significance of this event is that it is the only intersection system for which the probability can be calculated directly using FORM or SORM. In structural reliability the intersection of single events is used to model parallel systems.

Conditional Event

The conditional event E_C(X) is the collection of outcomes in E_1(X) conditioned on the occurrence of event E_2(X):

E_C(X) = (E_1 | E_2)(X)   (3.10)

The subevent E_1(X) or E_2(X) is either a single event, a union of single events or an intersection of single events. The complexity of the subevent depends on the calculation method used. If the FORM is used, the intersection of E_1(X) and E_2(X) must be an intersection of single events. In structural reliability the conditional event is used to model inspection no-find situations (inequality events) and, together with the measured value variable (involving equality events), to model inspection find situations (Section 3.6).

Event Probability

The event probability, P_E, is the probability that an outcome of the stochastic process X yields the event E:

P_E = P(E(X))   (3.11)

The Reliability Index

The reliability index β_R is defined as the argument of the standard normal distribution which yields one minus the event probability, i.e.,

β_R = Φ^(-1)(1 − P_E) = −Φ^(-1)(P_E)   (3.12)

see Madsen et al. (1986). It is customary to distinguish between the FORM reliability index, β_FORM, where P_E is obtained through a FORM approximation method, the SORM reliability index, β_SORM, where P_E is obtained through a SORM approximation method, and the Generalized reliability index, β_R, where P_E is the exact event probability.
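For the linear load-resistance event g = s − l of Eq. (3.2) with independent normal variables, P_E and β_R are available in closed form, which makes it a convenient check case for Eq. (3.12). A minimal sketch; the distribution parameters are illustrative only:

```python
from math import erf, sqrt
from statistics import NormalDist

_N = NormalDist()  # standard normal distribution

def std_normal_cdf(x: float) -> float:
    """Phi(x), computed via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def linear_g_reliability(mu_s, sigma_s, mu_l, sigma_l):
    """Exact P_E and beta_R for g = s - l with independent normal s, l."""
    beta = (mu_s - mu_l) / sqrt(sigma_s**2 + sigma_l**2)
    return std_normal_cdf(-beta), beta

def beta_from_pe(pe: float) -> float:
    """Eq. (3.12): beta_R = -Phi^{-1}(P_E)."""
    return -_N.inv_cdf(pe)

# Illustrative parameters: strength s ~ N(500, 50), load l ~ N(300, 40)
pe, beta = linear_g_reliability(500.0, 50.0, 300.0, 40.0)
print(beta, pe)          # beta ≈ 3.12, P_E ≈ 9.0e-4
print(beta_from_pe(pe))  # recovers the same beta via Eq. (3.12)
```

Note that the round trip through `beta_from_pe` is exactly the mapping of Eq. (3.12) between the event probability and the generalized reliability index.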

While the event probability is most often a rapidly changing nonlinear function of the distribution parameters, θ, the reliability index β_R is most often a more "linear" function of θ. This implies that the parametric sensitivity factors of β_R (Section 3.5.2) are most often predictive in a wider range of θ than those of P_E.

The other reason for using the reliability index is historically motivated. In early structural engineering applications the safety was measured in terms of an index, Cornell (1969), Hasofer and Lind (1974), Ditlevsen (1979a), corresponding to β_R for special cases.

The Design Point

In the transformed space of standard normally distributed variables, see Section 3.1.6, the design point is defined as the point on the failure surface with the smallest distance from the origin. Reference is made to Figure 3.2. The design point corresponds to the most probable combination of the stochastic variables causing failure.

Figure 3.2 The Design Point (u-space, showing the safe set, the failure set and the design point u*)

Transformation of Variables

The basic problem may be transformed into an equivalent problem where the basic stochastic variables are transformed into a standard-normal space, i.e., the space of uncorrelated normally distributed variables with zero mean and unit standard deviation. This transformation (Rosenblatt, 1952) is

u_1 = Φ^(-1)(F(x_1))
...
u_i = Φ^(-1)(F(x_i | x_1, x_2, ..., x_{i-1}))   (3.13)
...
u_n = Φ^(-1)(F(x_n | x_1, x_2, ..., x_{n-1}))

where x = (x_1, x_2, ..., x_j, ..., x_n) is the basic vector, u = (u_1, u_2, ..., u_i, ..., u_n) is the transformed vector, and Φ is the standard normal distribution. It is customary to use the term x-space for the space of random basic variables and the term u-space for the space of independent standard normal random variables. The usefulness of this mapping is due to:

Rotational symmetry: The probability content of domains in the u-space is invariant with rotation of the domains about the origin u = 0.

Exact results: The probability content in u-space can be obtained exactly for the linear domain, the elliptic and the hyperbolic domains, Rice (1980) and Hellstrom (1983), the parabolic domain, Tvedt (1988), and the domain obtained through a second-order Taylor expansion of the event function, Tvedt (1990).

Approximations: The probability content in u-space can be obtained approximately for a convex domain bounded by linear surfaces, i.e., a domain defined by a parallel system of single events with linear event functions. The method used is the approximation method for the cumulative multinormal distribution, Hohenbichler (1984) and Gollwitzer and Rackwitz (1988). If the intersection of the single events yields a parabolic surface in directions orthogonal to the subspace spanned by the gradients of the active constraints (see below), the probability content of the corresponding domain is approximated by a split into the linear parallel system and the parabolic domain, Hohenbichler (1984).

Asymptotic justification: Breitung's theorem, Breitung (1984a), states that the domain limited by a parabolic approximation to the single event boundary, obtained by fitting the curvatures at the design point (the approximation point closest to the origin), asymptotically, i.e., as β = ||u*|| → ∞, yields the true probability of the event.
A similar result is valid for a small intersection parallel system, Hohenbichler (1984). The rapid decay of the standard normal density away from the origin implies that the asymptotic approximations are useful also for moderate β, say β ≥ 2. It follows that, if an event can be approximated by one of the above-mentioned domains, or a composition of such domains, then an approximation of the probability content of the domain can be derived.

Simulation methods: The unbiased Directional simulation method, Bjerager (1988), and the Axis-Orthogonal importance sampling method, Schall et al. (1988), require a formulation in terms of u-space variables.
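For mutually independent basic variables the Rosenblatt transformation of Eq. (3.13) reduces to the coordinate-wise mapping u_i = Φ^(-1)(F_i(x_i)). A minimal sketch for a single exponential variable; the distribution choice is illustrative only:

```python
from math import exp, log
from statistics import NormalDist

_N = NormalDist()  # standard normal distribution

def expon_cdf(x: float, rate: float) -> float:
    """CDF of the exponential distribution, F(x) = 1 - exp(-rate*x)."""
    return 1.0 - exp(-rate * x)

def x_to_u(x: float, rate: float) -> float:
    """One independent coordinate of Eq. (3.13): u = Phi^{-1}(F(x))."""
    return _N.inv_cdf(expon_cdf(x, rate))

def u_to_x(u: float, rate: float) -> float:
    """Inverse mapping back to x-space: x = F^{-1}(Phi(u))."""
    return -log(1.0 - _N.cdf(u)) / rate

x = 2.5
u = x_to_u(x, rate=1.0)
print(u, u_to_x(u, rate=1.0))  # round trip recovers x = 2.5
```

For dependent variables the conditional distributions F(x_i | x_1, ..., x_{i-1}) of Eq. (3.13) must be used in place of the marginals, and the ordering of the coordinates then matters.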

3.2 Analytical Methods

The First Order Reliability Method, FORM

The basic idea of FORM is to approximate the system event boundary by linear surfaces at a selected set of points. The computational methods available for linear and piecewise linear events are then applied to obtain an estimate (possibly in terms of bounds) of the event probability. The selection of linearisation points depends on the system configuration.

Theoretical support for the FORM is offered by Breitung (1984a) and Hohenbichler (1984). Their theorems state that the system reliability index obtained using the FORM asymptotically (i.e., as the norm of each design point u* tends to infinity) yields the true reliability index,

β_FORM → β_R,  ||u*|| → ∞   (3.14)

under certain restrictions on the curvatures.

Single Event

The principles are illustrated in Figure 3.3 for two stochastic variables. The boundary of the single event is linearised at the point of maximum likelihood in u-space, the design point u*.

Figure 3.3 Linearisation of single event boundary in u-space (α = −∇g(u*)/||∇g(u*)||, u* = βα, β = ||u*||)

The FORM reliability index is

β_FORM = ||u*|| sign(g(0))   (3.15)

The corresponding probability of failure is

P_FORM = Φ(−β_FORM)   (3.16)

Intersection (Parallel System) of Single Events

The term small intersection denotes an intersection of single events where the event function is nonnegative at u = 0. The linearisation of a small intersection is illustrated in Figure 3.4. Three limit state functions g_h(u_1, u_2), g_i(u_1, u_2), g_j(u_1, u_2) define three boundary curves for the failure domain (g(u) = 0).

Figure 3.4 Linearisation of small intersection in u-space

The steps in the linearisation procedure are:

Step 1, Active constraints: The first linearisation point is the point of maximum likelihood, the design point u*, on the event boundary of the event function. The single events which have zero-valued functions at this point are denoted active.

Step 2, Inactive single events: The single events which are not active are denoted inactive. The inactive single events are linearised in succession, if possible. The linearisation point of an inactive single event is the point of maximum likelihood on the part of its event

boundary that intersects the event defined by the previous linearisations. If a single event's boundary does not intersect this event, it is deleted from the further calculations.

The term large intersection denotes an intersection of single events where the event function is strictly negative at u = 0. The linearisation of a large intersection is illustrated in Figure 3.5. Three limit state functions g_h(u_1, u_2), g_i(u_1, u_2), g_j(u_1, u_2) define three boundary curves for the failure domain (g(u) = 0).

Figure 3.5 Linearisation of large intersection in u-space

The steps in the linearisation procedure are:

Step 1, Active single events: The single events are first linearised at their separate design points. The single events that have design points on the event boundary of the linearised large intersection are denoted active.

Step 2, Inactive single events: The single events which are not active are denoted inactive. The inactive single events are linearised in succession, if possible. The linearisation point of an inactive single event is the point of maximum likelihood on the part of its event boundary that intersects the event defined by the previous linearisations. If a single event's boundary does not intersect this event, it is deleted from the further calculations.
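The design points used as linearisation points above must be found numerically. One common choice (not the only one) is the Hasofer-Lind/Rackwitz-Fiessler (HL-RF) iteration; a minimal sketch for a single event, using a finite-difference gradient and an illustrative linear limit state:

```python
from math import sqrt

def hlrf(g, u0, tol=1e-8, max_iter=100, h=1e-6):
    """HL-RF iteration for the design point of a single event g(u) <= 0.

    Returns the design point u* and beta = ||u*||. The gradient is
    approximated by forward differences with step h.
    """
    u = list(u0)
    n = len(u)
    for _ in range(max_iter):
        gu = g(u)
        grad = []
        for i in range(n):
            up = u[:]
            up[i] += h
            grad.append((g(up) - gu) / h)
        norm2 = sum(gi * gi for gi in grad)
        # HL-RF update: project onto the linearised boundary
        factor = (sum(gi * ui for gi, ui in zip(grad, u)) - gu) / norm2
        u_new = [factor * gi for gi in grad]
        if sqrt(sum((a - b) ** 2 for a, b in zip(u_new, u))) < tol:
            u = u_new
            break
        u = u_new
    return u, sqrt(sum(ui * ui for ui in u))

# Illustrative limit state in u-space: g(u) = 3 - u1 - 0.5*u2
u_star, beta = hlrf(lambda u: 3.0 - u[0] - 0.5 * u[1], [0.0, 0.0])
print(u_star, beta)  # design point (2.4, 1.2), beta = sqrt(7.2)
```

For a linear limit state the iteration converges in one step; for nonlinear boundaries convergence is not guaranteed, and damped or more robust optimisation schemes may be needed.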

Denoting the linearisation of single event no. i

g_i(u) = β_i + α_i^T u   (3.17)

the linear surfaces define a point, β, on the cumulative multivariate normal distribution with mean 0, standard deviation 1 and correlations R = (ρ_ij) = (α_i^T α_j). Thus

P_FORM = Φ(−β; R)   (3.18)

where the correlation matrix is denoted R. The corresponding reliability index is

β_FORM = −Φ^(-1)(P_FORM)   (3.19)

Union (Series System) of Single Events

Rather than computing the union of single events, one may choose to solve the complementary problem, which is an intersection of single events. The relation is

P( ∪_{i=1}^{m} {u; g_i(u) ≤ 0} ) = 1 − P( ∩_{i=1}^{m} {u; g_i(u) > 0} )   (3.20)

The intersection of single events is computed using the methods described above.

The second alternative is to use bounds for the failure probability. Each single event is linearised separately. An upper bound (P_U) and a lower bound (P_L) on the event probability of the linearised system are computed making use of simple bounds or Ditlevsen bounds.

Simple bounds: First order bounds using the individual event probabilities P_i:

P_L = max_{i=1,...,n} P_i   (3.21)

P_U = min( Σ_{i=1}^{n} P_i, 1 )   (3.22)

Ditlevsen bounds: Second order bounds (Ditlevsen (1979b), Kounias (1968)) that also use the intersection probabilities P_ij = P(E_i ∩ E_j):

P_L = P_1 + Σ_{i=2}^{n} max( 0, P_i − Σ_{j=1}^{i-1} P_ij )   (3.23)

P_U = P_1 + Σ_{i=2}^{n} ( P_i − max_{j<i} P_ij )   (3.24)

The bounds on the reliability index are

β_FORM,L = −Φ^(-1)(P_U)   (3.25)

β_FORM,U = −Φ^(-1)(P_L)   (3.26)

The Crude First Order Reliability Method, CRUDE-FORM

The CRUDE-FORM method replaces the event functions by their separate FORM linearisations prior to the computation of the event probability. The original event functions are used again in the calculation of sensitivity factors.

The motivation for using the CRUDE-FORM method is that a single event which is a member of more than one subsystem is linearised only once. The computational effort using the CRUDE-FORM is thus generally less than the computational effort using the FORM. If the system is a union of single events for which the bounds option is used, or if the system is one single event, the FORM and the CRUDE-FORM give identical results. If the system is an intersection of single events, the difference in the reliability estimates using the FORM and the CRUDE-FORM can be considerable.

The Second Order Reliability Method, SORM

The basic idea of the Second Order Reliability Method, SORM, is to approximate a single event boundary in u-space by a quadratic surface and an intersection of single events by a linear/parabolic surface. In each case the quadratic approximation is made at the design point. The computational methods available for quadratic forms and for linear and piecewise linear boundaries are then applied to obtain an estimate of the event probability (possibly in terms of bounds). The SORM for unions makes use of the SORM for single events and intersections of single events. The SORM significantly extends the class of problems that can be treated by approximation methods, because even highly curved event boundaries can be well represented by such methods.
Theoretical support for the SORM is offered, in the case of a single event and a union of single events, by Breitung (1984a), and in the case of intersections of single events and unions of intersections of single events, by Hohenbichler (1984). Their theorems state that the event probability obtained using the SORM asymptotically, i.e., as the norm of each design point u* tends to infinity, and under certain restrictions on the curvatures, yields the true event probability. Below, only the essential differences from the FORM are included.
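For a single event, Breitung's asymptotic parabolic result takes the explicit form P ≈ Φ(−β) Π_{i=1}^{n−1} (1 + β κ_i)^(−1/2), where κ_i are the main curvatures of the boundary at u*, here taken positive when the failure surface curves away from the origin (sign conventions vary between texts). A minimal sketch:

```python
from math import erf, sqrt

def std_normal_cdf(x: float) -> float:
    """Phi(x), computed via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def breitung_sorm(beta, curvatures):
    """Asymptotic SORM probability, Breitung (1984a).

    curvatures: main curvatures kappa_i of the boundary at the design
    point (sign convention as in the lead-in above); each factor
    1 + beta*kappa_i must be positive for the formula to apply.
    """
    p = std_normal_cdf(-beta)  # FORM probability Phi(-beta)
    for k in curvatures:
        p /= sqrt(1.0 + beta * k)
    return p

# Illustrative: beta = 3 with two equal curvatures of 0.1
print(breitung_sorm(3.0, [0.1, 0.1]))  # Phi(-3)/1.3 ≈ 1.04e-3
```

With all curvatures zero the formula reduces to the FORM result Φ(−β), which is a convenient sanity check.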

Single Event

Three methods for approximating the boundary of the single event are:

- Parabolic approximation of the main curvatures of the boundary at u*. The exact result for the probability of the parabolic domain is available, Tvedt (1988) and (1990), together with an asymptotic result by Breitung (1984a).
- Approximation derived from the second order Taylor expansion of the event function at u*. The event is either a parabolic quadratic form, an elliptic quadratic form or a hyperbolic quadratic form, Fiessler et al. (1979). The exact result for the quadratic form is available, Rice (1980), Hellstrom (1983), Tvedt (1990).
- Parabolic approximation derived from the diagonal of G(u*), where G(u) is the matrix of second derivatives of the event function, evaluated at the design point u*. The SORM event probability is computed using the exact results for the parabolic quadratic form, Tvedt (1989). The approximation is a modified version of a suggestion by Der Kiureghian et al. (1987).

Intersection (Parallel System) of Single Events

Small Intersection: The SORM approximation is here used to derive a correction factor to the FORM probability of failure, Hohenbichler (1984). The intersection of the active constraints describes a nonlinear surface. This surface is approximated by a parabolic surface. The SORM probability of failure is in this case an asymptotic result for the parabolic/linear failure domain

P_SORM = P_FORM Q_SORM / Φ(−β)   (3.27)

where Q_SORM is the separate probability of the parabolic quadratic form and β is the design point reliability index.

Large Intersection: The single events are linearised separately. The linearised version of a single event is as in FORM; however, in SORM the distance parameter is scaled to account for the second order probability.

3.3 Simulation Methods

Monte-Carlo Simulation

The Monte-Carlo simulation method samples from the joint distribution of the n random basic variables X. The indicator function I(x)

I(x) = 1 for g(x) ≤ 0,  I(x) = 0 for g(x) > 0   (3.28)

is evaluated at each sampled point and the probability is estimated as

P_E = (1/N) Σ_{i=1}^{N} I(x_i)   (3.29)

An advantage of the method is that it makes use of point values of the event function only. Thus, the event function is not required to be a smooth function of its variables. Also, the estimate of the event probability is unbiased. The disadvantage is the long computational time for small probabilities (see Section 3.4.4). An illustration of the Monte-Carlo method is given in Figure 3.6.

Figure 3.6 The Monte-Carlo Method (samples in the x_i-x_j plane on both sides of the boundary g(x) = 0)

Directional Simulation

The idea of directional simulation was put forward by Deak (1980), who used it to sample the multinormal cumulative probability function. The method is a conditional expectation simulation method that samples directions uniformly distributed on the surface of the n-dimensional unit sphere centered at the origin of the u-space, Figure 3.7.
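Before turning to directional simulation, the crude Monte-Carlo estimator of Eqs. (3.28)-(3.29) can be sketched as follows; the load-resistance limit state and its parameters are illustrative only. Note that the number of samples needed grows roughly as 1/P_E for a fixed coefficient of variation of the estimate, which is the reason the method is costly for small probabilities:

```python
import random
from math import sqrt

def monte_carlo_pf(g, sample, n=100_000, seed=1):
    """Crude Monte-Carlo estimate of P(g(X) <= 0), Eq. (3.29).

    g: limit state function taking one sampled outcome.
    sample: draws one outcome of X given a random.Random instance.
    Returns the estimate and its coefficient of variation.
    """
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n) if g(sample(rng)) <= 0.0)
    pf = hits / n
    cov = sqrt((1.0 - pf) / (n * pf)) if hits else float("inf")
    return pf, cov

# Illustrative limit state: g = r - l with r ~ N(5, 1), l ~ N(2, 1);
# the exact failure probability is Phi(-3/sqrt(2)) ≈ 1.7e-2.
pf, cov = monte_carlo_pf(
    lambda x: x[0] - x[1],
    lambda rng: (rng.gauss(5.0, 1.0), rng.gauss(2.0, 1.0)),
)
print(pf, cov)
```

The indicator-based estimator only needs point values of g, which is exactly the robustness property noted above.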

Figure 3.7 Directional Simulation in u-space

The method assembles the contributions to the probability integral conditioned on the sampled directions. A number of variance reducing methods, i.e., sampling of random orthogonal sets of directions, have been proposed. Application of the method to general systems with nonlinear event boundaries was suggested by Bjerager (1988). The motivation for the method is that in many cases it allows unbiased and efficient sampling of small probabilities, provided that n is not too large.

The probability of the event can be formulated as

P_E = ∫_Ω ∫_{g(va) ≤ 0} f_{χ²_n}(v) dv dω   (3.30)

where v is a χ²_n-distributed random variable, a is a unit vector and dω is the surface element of the n-dimensional unit sphere. The unit directions a are sampled, and the density f_{χ²_n}(v) conditioned on a is integrated. This may be done by identifying the upper bounds v(u_i) and the lower bounds v(l_i) of the intervals where g(va) < 0 and summing up the contributions to the integral

P(a) = Σ_{i=1}^{m} [ χ²_n(v(u_i)) − χ²_n(v(l_i)) ]   (3.31)

assuming there are m such intervals, where χ²_n(·) here denotes the cumulative chi-squared distribution with n degrees of freedom. The estimator for the probability is

P_E = (1/N) Σ_{i=1}^{N} P(a_i)   (3.32)

where a_i, i = 1, ..., N are the simulated directions. The simplest method that reduces the variance of the sampled set is to replace P(a) by

P_1(a) = (1/2)( P(a) + P(−a) )   (3.33)

A further reduction of the variance of the sampled set is obtained through sampling of a randomly oriented orthonormal system of vectors b_1, ..., b_n. A new system of vectors is formed as follows: from each set {b_i1, b_i2, ..., b_ik} of k out of the n vectors (there are m = n!/(k!(n−k)!) such sets), 2^k vectors a_i(b) are formed with signs s_j = ±1, j = i_1, i_2, ..., i_k. One point is sampled as

a_i(b) = (1/√k)( s_i1 b_i1 + s_i2 b_i2 + ... + s_ik b_ik )   (3.34)

O_k = (1/(m 2^k)) Σ_{i=1}^{m} Σ_{s_j = ±1} P(a_i(b))   (3.35)

The estimator of the probability based on sampling of N orthonormal systems of vectors is now

P_E,k = (1/N) Σ_{j=1}^{N} O_k,j   (3.36)

Taking advantage of symmetry, the number of actual calculations is half the amount indicated by the above expressions. Thus

r = [ n!/(k!(n−k)!) ] 2^(k−1)   (3.37)

The number of evaluations of the event function is thus approximately r n_a, where n_a is the average number of event function calls needed to perform the summation of Eq. (3.31). The following estimators are recommended, depending on the number of stochastic variables n:

    n ≤ 2 or 16 ≤ n ≤ 50:  use P_E,1(a)
    n = 3 or 11 ≤ n ≤ 15:  use P_E,2(a)   (3.38)
    4 ≤ n ≤ 10:            use P_E,3(a)
    n > 50:                use P_1(a)

Axis-Orthogonal Simulation

Importance sampling around the design point has been suggested by Shinozuka (1983) and also by Harbitz (1983). The recommended method is, however, based on the ideas of Hohenbichler and Rackwitz (1988) and Schall et al. (1988). The idea is to define an axis for a small intersection domain, i.e., an intersection of single events where the u-space origin is in the complementary event, and to define the sampling density in a plane orthogonal to this axis. The method assumes that the true boundary of the event is suitably approximated by a set of linear surfaces obtained using the FORM linearisation procedure for a small intersection domain. The idea is to calculate the probability of the linearised domain by methods available for the cumulative multinormal integral, and to obtain an estimate of the probability of the difference between the true event and the linearised event using the Axis-Orthogonal sampling method. The principles are illustrated in Figure 3.8.

Figure 3.8 Axis-Orthogonal Simulation in u-space

Selection of Axis

Let a_i, i = 1, 2, ..., m be the set of normalised gradients of the single events active at the design point of the parallel system. The axis is the direction of the averaged gradient, pointing into the interior of the linearised event at the design point:

a = Σ_{i=1}^{m} a_i / || Σ_{i=1}^{m} a_i ||   (3.39)

The coordinate system is rotated so that the new coordinate u_n is in the direction of the axis, while ũ = (u_1, u_2, ..., u_{n−1}) denotes the coordinates normal to u_n. The axis is now defined as

a = u_0 + u_n e_n   (3.40)

where e_n is a unit vector in the direction of u_n.

Sampling of a Multiplicative Correction Factor

The sampling density, H_M, is in this case as follows:

a) In the space spanned by the gradients of the m single events active at the design point, the m-dimensional standard normal density, conditioned on the linearised event, is used, Bjerager (1988).

b) In the directions normal to this space, and normal to the axis, n − m − 1 independent standard normal random variables centered on the axis are sampled.

Thus

P_E = P_L ∫_{g(u) ≤ 0} [ P_A(ũ) / Φ(−u_n(ũ)) ] H_M dũ = P_L C   (3.41)

where P_L is the probability of the linearised domain and u_n(ũ) is the intersection of the linearised domain and the line

l = ũ + u_n e_n   (3.42)

P_A(ũ) is the true probability of the event conditioned on the line defined by l. The estimator of C is

C = (1/N) Σ_{i=1}^{N} P_A(ũ_i) / Φ(−u_n(ũ_i))   (3.43)

Sampling of an Additive Correction

The density function sampled from is in this case the (n−1)-dimensional standard normal density function φ(ν̃), with ũ = ũ_0 + ν̃, i.e., with center on the axis a. The linearised domain is bounded by the linear approximations of the single events active at the design point and the linear approximations of the constraints having a positive a_i^T a. The difference between the probability of the true event and the probability of the linear approximation of the event is estimated. Thus

P_E = P_L + ∫_{g(u) ≤ 0} { P_A(ũ) − Φ(−u_n(ũ)) } φ(ũ) dũ = P_L + A   (3.44)

with ũ = ũ_0 + ν̃, and the other symbols explained above. The estimator of A is

A = (1/N) Σ_{i=1}^{N} { P_A(ũ_i) − Φ(−u_n(ũ_i)) } φ(ũ_i)/φ(ν̃_i)   (3.45)

Computation of P_A(ũ)

The line integral

P_A(ũ) = ∫_{g(ũ, u_n) ≤ 0} φ(u_n) du_n   (3.46)

is evaluated once for each simulation of ũ. Because this is the time-consuming part of the calculations, a number of options are available to reduce the effort in the simple cases. The general integration method is to search for the points u_n(i_l) and u_n(i_u) denoting the lower and upper bounds of the intervals where g(ũ, u_n) is less than zero, and then sum up the contributions using the formula

P_A(ũ) = Σ_{i=1}^{m} [ Φ(u_n(i_u)) − Φ(u_n(i_l)) ]   (3.47)

assuming there are m such intervals.
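Closing the simulation methods, the directional-simulation estimator of Eqs. (3.30)-(3.32) can be sketched for the special case n = 2, where the χ²₂ survival function has the closed form exp(−v/2). The linear limit state and the direction count are illustrative only:

```python
import random
from math import cos, exp, pi, sin

def chi2_2_sf(v: float) -> float:
    """Survival function of the chi-squared distribution, 2 d.o.f."""
    return exp(-0.5 * v)

def directional_pf_2d(g_root, n_dirs=20_000, seed=1):
    """Directional simulation, Eqs. (3.30)-(3.32), for n = 2.

    g_root(a) must return the distance r >= 0 along the unit direction a
    at which the event boundary is crossed, or None if g stays positive
    along that ray (the single-crossing case of Eq. (3.31))."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_dirs):
        theta = rng.uniform(0.0, 2.0 * pi)
        r = g_root((cos(theta), sin(theta)))
        if r is not None:
            total += chi2_2_sf(r * r)  # P(radius^2 >= r^2 | direction)
    return total / n_dirs

# Illustrative linear event g(u) = beta - u1, i.e. failure for u1 >= beta
BETA = 2.5
def root(a):
    return BETA / a[0] if a[0] > 1e-12 else None

pf = directional_pf_2d(root)
print(pf)  # exact value is Phi(-2.5) ≈ 6.21e-3
```

Each direction contributes an exact conditional probability, which is why the method remains unbiased and efficient for small probabilities as long as n is moderate.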

3.4 Choice of Method

The applicability of a particular method depends, as in all mathematical modelling, on the problem at hand and on the objective of the analysis. In the following, some important characteristics of the methods are tabulated to assist the user in choosing the method appropriate to the problem at hand. The following issues are important to the selection of a method:

- The objective of the analysis.
- The number of random variables involved.
- The computational cost of evaluating the event (e.g. limit state) function.
- The properties of the event function, e.g. existence, continuity and differentiability.
- The reliability level of interest.

FORM

The motivations for using the FORM are:

Almost linear event boundary: It is often found that the distributions used to model random basic variables are narrow, i.e., the coefficients of variation have small values. In such cases, the event boundary in u-space is often significantly less curved than the event boundary in x-space. Because of this, the desired results can be obtained by the FORM with a quality sufficient for the intended application.

Comparison of probabilistic models: An interesting application of probabilistic methods is to compare probabilistic models under variation of the model assumptions. In such cases the accuracy required is one that permits comparison rather than exact results.

Computational efficiency: In many applications in engineering, the calculation of a variable may require computationally costly subroutines, e.g., the value of the variable may require use of the finite element method. In such cases, the use of other methods, e.g., simulation methods, may be computationally prohibitive for small failure probabilities.

Systems: The method is applicable to single events, unions and intersections of single events, and unions of intersections of single events.

Event function: In principle the event function must be continuously differentiable.
However, in practical applications it may be sufficient that this is the case near the design point.

Results: Probability, parametric sensitivity factors and uncertainty importance factors.

Applicability: The method is particularly useful for the high reliability problems often encountered in engineering. The method assumes that each single event boundary is linear or nearly linear in the neighborhood of the approximation points. The method is also quite useful if qualitative statements about the model are sufficient, e.g., if the results are used to compare models.

Check: If the applicability of the FORM to a specific model is uncertain, the FORM results should be compared for selected cases with the results obtained using simulation or SORM. In many cases a comparison with SORM results suffices.

Computation cost: The cost of a FORM calculation is essentially the cost of solving a number of optimisation problems, one for each subsystem in the model. Assume that the system is composed of m subsystems, that subsystem no. i is composed of k_i single events, and that single event no. j of subsystem no. i has n_ij random variables. If solving the optimisation problem of subsystem no. i requires r_i iterations, then the computational effort is of the order Σ_{i=1}^{m} r_i Σ_{j=1}^{k_i} (n_ij + 1). This implies that the solution time is not a function of the probability; it is linear in the number of stochastic variables and events. The term n_ij + 1 refers to the calculation of the gradients ∇g(x). In some cases gradients may be derived analytically or by efficient numerical algorithms (e.g. the adjoint method in FEM). In such cases FORM will be particularly efficient.

CRUDE-FORM

The CRUDE-FORM linearises each single event boundary separately prior to the probability calculation. Then the FORM is applied to the linearised system. The applicability of the method is as follows:

Systems: The method is applicable to single events, unions and intersections of single events, and unions of intersections of single events.

Event function: In principle the event function must be continuously differentiable.
However, in practical applications it may be sufficient that this is the case near the design point only.

Results: Probability, parametric sensitivity factors and uncertainty importance factors.

Applicability: The method is identical to the FORM for single-event systems and for unions of single events (if bounds are used). For intersections of single events and for unions of intersections, the method assumes that the boundaries of the event are well approximated by the independent linearisation procedure for the single events. Thus, care should be exercised when using this method. Note that the computational cost is of the order of a FORM computation.
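The optimisation at the heart of a FORM (and hence CRUDE-FORM) computation can be sketched as follows. The limit state, its numbers and all names are illustrative assumptions, not values from this guideline; the problem is assumed already transformed to standard normal u-space:

```python
from math import sqrt

def form_beta(g, grad_g, n, tol=1e-10, max_iter=100):
    """First order reliability index via the Hasofer-Lind/Rackwitz-Fiessler
    iteration, assuming the problem is mapped to standard normal u-space."""
    u = [0.0] * n
    for _ in range(max_iter):
        grad = grad_g(u)
        # Step to the foot of the perpendicular from the origin onto the
        # limit state surface linearised at the current point u
        scale = (sum(gi * ui for gi, ui in zip(grad, u)) - g(u)) / \
                sum(gi * gi for gi in grad)
        u_new = [scale * gi for gi in grad]
        if sqrt(sum((a - b) ** 2 for a, b in zip(u_new, u))) < tol:
            u = u_new
            break
        u = u_new
    beta = sqrt(sum(ui * ui for ui in u))
    return beta, u  # reliability index and design point

# Hypothetical linear limit state g = R - S with R ~ N(6, 1), S ~ N(3, 1),
# written directly in u-space: R = 6 + u1, S = 3 + u2
g = lambda u: (6.0 + u[0]) - (3.0 + u[1])
grad_g = lambda u: [1.0, -1.0]
beta, u_star = form_beta(g, grad_g, 2)  # exact result here: beta = 3/sqrt(2)
```

For this linear limit state the iteration converges in one step, which is why the cost is governed by the number of gradient evaluations, r_i(n + 1), rather than by the probability level.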

Check: If the applicability of the CRUDE-FORM to a specific model is uncertain, the CRUDE-FORM results should be compared with the results obtained using FORM.

Computation cost: The cost of a CRUDE-FORM calculation is essentially the cost of computing the approximation point of each single event separately. Assume there are m single events, that single event no. i has n_i random variables and that the computation of the approximation point needs r_i iterations. Then the computational effort is of the order

Σ_{i=1}^{m} r_i (n_i + 1)

SORM

In the case of a single event, the SORM fits a quadratic surface to the boundary of the event. In case the system comprises more than one single event, the parabolic SORM is used. The SORM is used to compute a correction factor to the FORM result. The applicability of the method is as follows:

Systems: The method is applicable to single events, unions and intersections of single events, and unions of intersections of single events.

Event function: In principle the event function must be two times continuously differentiable. However, in practical applications it may be sufficient that this is the case near the design point.

Results: Probability, parametric sensitivity factors and uncertainty importance factors.

Applicability: The method assumes that the boundary of the event is closely approximated by a quadratic surface in the neighbourhood of the approximation point. The invariance property of the parabolic quadratic form makes it generally applicable. The diagonal fit is generally applicable if the event function can be written in the additively separable form g(x) = Σ_i h_i(x_i) and the random variables X_i are mutually independent. It is also applicable if the curvatures of a full parabolic approximation have small values. The lack of invariance implies that care has to be exercised whenever the second order Taylor expansion is used.
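The correction factor that SORM applies to the FORM probability can be illustrated with Breitung's asymptotic formula, a common choice (an assumption here, since this guideline does not prescribe a particular correction):

```python
from math import sqrt
from statistics import NormalDist

def sorm_breitung(beta, curvatures):
    """Breitung-type asymptotic SORM correction to the FORM probability.
    curvatures: principal curvatures of the event boundary at the design
    point, taken positive here when the boundary curves away from the origin."""
    p_form = NormalDist().cdf(-beta)  # first order (FORM) estimate
    factor = 1.0
    for kappa in curvatures:
        factor /= sqrt(1.0 + beta * kappa)
    return p_form * factor

# A flat boundary (all curvatures zero) reproduces the FORM result;
# a boundary curving away from the origin lowers the probability estimate
p_flat = sorm_breitung(3.0, [0.0, 0.0])
p_curved = sorm_breitung(3.0, [0.1, 0.1])
```

Deriving the n − 1 curvatures is exactly the "extra computations" counted in Eq. (3.48) below.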
Check: If the applicability of the SORM to a specific model is uncertain, the SORM results should for selected cases be compared with the results obtained using simulation.

Computation cost: The cost of the SORM is the cost of the FORM plus the cost of deriving the quadratic approximation. If the event function has n random variables, the number of extra evaluations of the event function is:

Parabolic: n(n − 1)/2
Second order Taylor: n(n + 1)/2     (3.48)
Diagonal: 2n

If n is large, say 100, the simulation methods become competitive also for small probabilities. For a small intersection: if there are k active single events in the intersection system, the number of computations of the event functions needed to establish the parabolic approximation is k(n − k)(n − k − 1). For a large intersection the cost is of the order of (number of single events)² times the cost of a single-event SORM.

Monte-Carlo Simulation

The Monte Carlo simulation method samples from the joint distribution of the random basic variables.

Systems: All systems except systems involving equality events.

Event function: The calculation of the event function should be possible at all valid realizations of the basic variables, i.e. at all points where the joint density has a positive value. If the event function cannot be calculated at some point x, then the point contributes zero to the probability.

Results: Probability. (Sensitivities may be calculated at extra cost.)

Applicability: The Monte Carlo simulation method gives an unbiased estimate of the probability. The method is efficient for the simulation of the central part of the distribution, i.e., probabilities in the range from 0.1 to 0.9, say.

Computation cost: The number of points (evaluations of the g-function) required to obtain a reasonably reliable estimate of the probability P_E is approximately 100/P_E.

Directional Simulation

The directional simulation method samples directions uniformly distributed on the surface of the n-dimensional unit sphere in u-space and accumulates a weighted sum of the event probabilities conditional on the simulated directions.

Systems: All systems.

Event function: The event function should be calculable at each point where the density of the random basic variables X is positive.

Results: Probability and parametric sensitivity factors.

Applicability: The directional simulation method gives an unbiased estimate of the probability. The method is recommended for the simulation of probabilities below 0.1 and above 0.9.

Computation cost: The computational effort is proportional to the proportion of the area of the surface of the n-dimensional unit sphere that generates directions intersecting the event. As this proportion tends to be reduced with an increasing number of random variables, the Monte Carlo method becomes competitive if the number of random variables is large. The Monte Carlo simulation method should be used for large n, n ≥ 100, say.

Axis-Orthogonal Simulation

The axis-orthogonal simulation is an importance sampling method that samples a correction to a FORM approximation of a single event or an intersection of single events. The number of samples needed to obtain the correction is small if the intersection system is closely approximated by the FORM.

Systems: Single events and intersections of single events (small intersections only).

Event function: The event functions should be calculable at each point u. If the event function cannot be calculated at some point u, it is assigned the value zero at u.

Results: Probability, reliability index.

Applicability: The axis-orthogonal simulation method gives an unbiased estimate of the probability in the case where there is only one local design point. The method is recommended for the simulation of probabilities smaller than 0.1 and greater than 0.9.

Computation cost: The computational effort depends on how well the event boundary is approximated by a set of linear surfaces. If well approximated, the number of samples needed may be quite small.

Evaluation of Analysis Results

General

The basic results of a probabilistic analysis are
a set of design point values of the random variables
the probability of an event
the conditional probability of an event

parametric sensitivity factors
uncertainty importance factors.

Results which are specific for time dependent reliability analysis, i.e., the crossing rate and the crossing probability, are described in Section 3.6. The event probability and the reliability index are described in Section 3.1.

Interpretation of Analysis

Reliability analysis should be performed and interpreted with a critical mind. The analysis results should be interpreted critically, as the results and corresponding conclusions will depend significantly on
the analysis model and associated bias
the distribution types
the target reliability level

An essential result of a structural reliability analysis is the reliability measure. The measure may depend on a number of assumptions adopted by the analyst. The reliability measure shall be interpreted as an engineering factor expressing the current knowledge/information about the structure and its environment under the assumptions set forth in the analysis (for example assumptions regarding probability distribution types and the structural models adopted). The reliability measure thereby introduces an ordering of the considered structures with respect to reliability that can be used in comparative studies.

The reliability measure is often taken as the reliability index, defined as a monotone function of the failure probability. There are several reasons for using the reliability index instead of the probability directly: the index has a historical significance, it has a clear geometrical interpretation in a number of basic cases, it is an essential component in first order reliability methods, and it has certain simple relationships to common partial safety factors.

Additional results which may be obtained from the reliability analysis are:
sensitivity factors (parameter sensitivities, importance factors)
the estimation point (most likely failure point).
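The reliability index mentioned above is most commonly the generalised index β = −Φ⁻¹(P_F), where Φ is the standard normal distribution function; a minimal sketch of the ordering it induces:

```python
from statistics import NormalDist

nd = NormalDist()

def reliability_index(p_f):
    """Generalised reliability index beta = -Phi^{-1}(P_f): a monotone
    decreasing function of the failure probability."""
    return -nd.inv_cdf(p_f)

# A smaller failure probability maps to a larger index, so the index
# orders the considered structures by reliability
betas = [reliability_index(p) for p in (1e-2, 1e-3, 1e-4)]
```

In the basic linear FORM case this index coincides with the distance from the origin to the design point in u-space, which is its geometrical interpretation.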
As a guideline for a critical interpretation of an analysis the following items are listed:
- It is important that the selected analysis model reflects the actual failure mode in an appropriate way.
- The selected distributions should be representative for the considered analysis.
- It should be assured that assumptions made for reliability analysis of problems in design and in-service life are fulfilled through performance of, e.g., fabrication or in-service inspection.
- The analysis method should give correct results based on the limit state function and the distributions given as input to the analysis.
- Calculated probabilities of failure are nominal values. The use of relative values from reliability analysis is preferred over that of absolute values.

Analysis Models

Analysis models that reflect the actual failure mode should be selected. Wherever possible, bias and model uncertainty should be estimated and included in the analysis models. Bias and model uncertainty may for example be obtained from comparison of experimental and theoretical values, and can be represented by a model uncertainty factor defined as the ratio between experimental (true) values and theoretical (predicted) values. Bias is defined as the expected estimation error and will therefore be reflected in a non-unity mean of the model uncertainty factor. Variability in the model will be reflected in the standard deviation of the model uncertainty factor.

Analysis Methods

It should be checked that a solution of the limit state function is achieved with sufficient accuracy. This may be checked using the data for the calculated estimation point. For special problems (e.g., if the limit state function is not monotonic in all basic variables, or if one or more distributions are bimodal) it should be checked that the correct estimation point is obtained. This may be checked using alternative starting points for the optimisation algorithm that finds the most likely failure point. For problems with limit state surfaces possessing several local minima (of similar magnitude) special care is needed in estimating P_F, and FORM/SORM results should be verified by simulation, e.g., by directional simulation.

FORM/SORM results are generally highly accurate for practical purposes, i.e., for the small probabilities, say P_F < 10⁻³, and smooth limit state surfaces typical of most problems. However, they do not directly yield results indicating the accuracy of the reliability estimates. Confidence intervals for the reliability measures may be established by simulation; in particular, importance sampling (using the FORM estimation point as the centre of the sampling density) may be efficient for this purpose.
If it is difficult to achieve a solution for a limit state function, a reformulation should be considered. Use of logarithms may have a "linearising effect" on the limit state function, which may improve FORM/SORM solutions. This is achieved by rewriting the limit state function from

g(x) = R(x) − S(x)   (3.49)

to

g(x) = ln R(x) − ln S(x)   (3.50)

Distribution

It should be checked that the distributions selected for the analysis are representative. Whenever the type of distribution is uncertain, another analysis with a more conservative distribution (a sensitivity study) may be performed for comparison. It is important to have a realistic distribution for the parameters contributing most to the failure probability. The importance of a distribution may depend on the estimation point, which again depends on the reliability level. For example, the tails of the distributions become more important as the reliability level is increased. An evaluation of appropriate distributions should therefore be performed at a relevant reliability level. Both the distribution type and the parameters describing the distribution should be considered.
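The sensitivity of tail probabilities to the distribution type can be illustrated by comparing a normal and a lognormal model with identical mean and standard deviation (the numbers are hypothetical):

```python
from math import log, sqrt
from statistics import NormalDist

# Two candidate models for the same variable: mean 10, std 2 (illustrative values)
mu, sigma = 10.0, 2.0
normal_tail = lambda t: 1.0 - NormalDist(mu, sigma).cdf(t)

# Lognormal with matching mean and std: parameters of the underlying normal
zeta = sqrt(log(1.0 + (sigma / mu) ** 2))
lam = log(mu) - 0.5 * zeta ** 2
lognormal_tail = lambda t: 1.0 - NormalDist(lam, zeta).cdf(log(t))

# Near the mean the two models almost agree; far out in the tail, where the
# design point sits at high reliability levels, they diverge by orders of magnitude
ratio_near = lognormal_tail(12.0) / normal_tail(12.0)
ratio_far = lognormal_tail(20.0) / normal_tail(20.0)
```

This is why the distribution evaluation should be performed at the reliability level of interest, not only around the mean.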

Interpretation of Calculated Reliability Measures

It should be kept in mind that it is difficult to work with absolute values of the probability of failure, as indicated in Chapter 2 of this Guideline. The calculated reliability measures depend on subjective assessments by the analysts and, as such, should be considered as nominal values. Whenever possible, it is recommended to base conclusions on relative values or on comparisons with results from alternative analyses of "well known cases", as recommended in Chapter 2.

Fulfillment of Assumptions for Analysis

The calculated reliabilities will depend on the assumptions made for the analysis. It is important that these assumptions are fulfilled when going from theoretical considerations on safety to design, fabrication, and in-service life, i.e., it is important that assumptions on material properties, fabrication tolerances, probability of detecting defects, etc., are fulfilled in order to obtain the target safety level for the built structure. This can be achieved through a proper development of specifications for design and fabrication and through a proper development of procedures for in-service inspection.

Conditional Probability

In many cases it is of interest to compute the probability that an event occurs conditioned on the occurrence of another event. The event that a structure fails within a specified number of years may, for example, be conditioned on the detection or non-detection of a number of cracks of given size. This is useful for updating of failure probabilities. If the problem at hand involves no single equality events, then the law of conditional probabilities is used. The probability of event E_A conditioned on event E_B is

P(E_A | E_B) = P(E_A ∩ E_B) / P(E_B)   (3.51)

If the event E_B involves a single equality event, SE, as E_B = Ẽ_B ∩ {x; G_SE(x) = 0}, then

P(E_A | E_B) = [∂/∂θ P(E_A ∩ Ẽ_B ∩ {G_SE + θ ≤ 0})]θ=0 / [∂/∂θ P(Ẽ_B ∩ {G_SE + θ ≤ 0})]θ=0   (3.52)
where θ is an artificial zero-valued variable added to the single equality event function. If the event E_B involves the single equality event SE as a measured value of some quantity, e.g., a measured crack size, then

P(E_A | E_B) = [∂/∂θ P(E_A ∩ Ẽ_B ∩ {G_SE(θ) ≤ 0})]θ=0 / [∂/∂θ P(Ẽ_B ∩ {G_SE(θ) ≤ 0})]θ=0   (3.53)

where θ is an artificial zero-valued variable added to the measured value. Note that most structural reliability textbooks, including those referenced in this document, contain an error (relative to Eq. (3.53)) in the formulation of P(E_A | E_B) for the case where the single equality event SE is defined as a measured value of some quantity such as a crack size.

One of the most useful applications of SRA is inspection and maintenance planning, remaining life assessment and life extension. For this purpose it is necessary to calculate conditional probabilities, e.g., (1) the probability of fatigue failure conditioned on a crack of a given uncertain size being found, or (2) the probability of fatigue failure given that no cracks were found in a number of inspections with a certain inspection method. In the first case it is required to calculate

P(g(x) < 0 | h(x, a_D) = 0) = [∂/∂θ P(g(x) < 0 ∩ h(x, a_D + θ) ≤ 0)]θ=0 / [∂/∂θ P(h(x, a_D + θ) ≤ 0)]θ=0   (3.54)

in which h(x, a_D + θ) is the event margin corresponding to detection of a crack of size a_D, where a_D may be stochastic due to measurement uncertainties, and g(x) is the limit state function corresponding to failure. This means that the probabilities can be calculated as the partial sensitivity factors of a parallel system with respect to the observed quantity a_D, the detected crack size, which can be an uncertain quantity. In the second case it is required to calculate

P(g(x) < 0 | h(x) > 0) = P(g(x) < 0 ∩ h(x) > 0) / P(h(x) > 0)   (3.55)

This means that the conditional probability can be calculated by any software that can handle parallel systems, if due consideration is given to the correlation between the variables in the failure event formulation g(x) < 0 and the inspection event formulation h(x) > 0.
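The pattern of Eq. (3.55) can be sketched with a crude Monte Carlo estimate of P(A ∩ B)/P(B); the toy failure and inspection margins below, and their shared correlation source, are illustrative assumptions only:

```python
import random

random.seed(1)

def conditional_prob(event_a, event_b, sample, n=100_000):
    """Crude Monte Carlo estimate of P(A | B) = P(A and B) / P(B),
    the pattern of Eq. (3.55)."""
    n_b = n_ab = 0
    for _ in range(n):
        x = sample()
        if event_b(x):
            n_b += 1
            if event_a(x):
                n_ab += 1
    return n_ab / n_b

# Hypothetical toy model: a common factor u correlates the failure margin
# g(x) = strength - load with the inspection margin h(x)
def sample():
    u = random.gauss(0.0, 1.0)
    strength = 5.0 + u + random.gauss(0.0, 0.5)
    h_margin = 1.0 + u + random.gauss(0.0, 0.5)  # h(x) > 0: no crack found
    load = random.gauss(3.0, 1.0)
    return strength, h_margin, load

fail = lambda x: x[0] - x[2] < 0   # failure event, g(x) < 0
no_crack = lambda x: x[1] > 0      # inspection event, h(x) > 0
p_updated = conditional_prob(fail, no_crack, sample)
```

Because strength and the inspection margin share the factor u, surviving the inspection shifts the conditional failure probability below the unconditional one; the correlation between the two event formulations is exactly what the software must handle.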
Example 3.1: Example Based on Linear Elastic Fracture Mechanics

The motivation for describing the fatigue process as stochastic may be illustrated by Figure 3.9, where the crack depth in identical specimens is plotted as a function of the number of load cycles. The tests are from the same laboratory, with identical initial crack geometries. In a realistic application the loads, response analysis, local stress analysis and initial crack size are, in addition, stochastic.

Figure 3.9 Crack depth as a function of time for identical specimens

In linear elastic fracture mechanics, the increase in crack depth a in a load cycle is described by

da/dN = C (ΔK)^m   (3.56)

where (C, m) are material parameters estimated from laboratory tests. ΔK is the stress intensity factor range, usually expressed as

ΔK = σ √(π a) Y(a)   (3.57)

where σ is the far-field stress and Y(a) is the geometry function accounting for the stress concentration due to the presence of the crack and for local geometry effects. This may be reformulated as a limit state function,

g(x) = ∫_{a_0}^{a_c} da / (√(π a) Y(a))^m − C Σ_i σ_i^m   (3.58)

if cycle ordering effects can be neglected. The stochastic variables in a typical case would be represented by x = (a_0, a_c, Y, C, m, σ): the initial crack size distribution, the critical crack size distribution, local geometrical effects, the material parameters and the far-field stress. The calculation of the far-field stress may be broken down to a model of random variables representing environmental and other loads, response analysis and local stress analysis. For this illustration we will not focus on these further details; reference is given to Skjong and Torhaug (1991). The inspection event that a crack is found with an uncertain crack size a_D, due to the sizing inaccuracy, may be formulated as the event h(x, a_D) = 0.
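For a fixed realization of the variables, the limit state of Eq. (3.58) can be evaluated numerically. The sketch below assumes Y(a) = 1 (an infinite-plate idealisation) and uses illustrative, uncalibrated parameter values:

```python
from math import pi, sqrt

def crack_growth_margin(a0, ac, C, m, stress_ranges, y=lambda a: 1.0, steps=2000):
    """Limit state g(x) of Eq. (3.58): the crack-growth resistance integral
    from a0 to ac minus the accumulated load term C * sum(sigma_i ** m).
    y(a) is the geometry function Y(a); Y(a) = 1 is assumed here."""
    da = (ac - a0) / steps
    # Midpoint rule for the integral of da / (sqrt(pi*a) * Y(a))**m
    integral = 0.0
    for i in range(steps):
        a = a0 + (i + 0.5) * da
        integral += da / (sqrt(pi * a) * y(a)) ** m
    return integral - C * sum(s ** m for s in stress_ranges)

# Illustrative, uncalibrated numbers: g > 0 means the crack stays subcritical
g1 = crack_growth_margin(a0=1e-3, ac=20e-3, C=1e-12, m=3.0,
                         stress_ranges=[80.0] * 10_000)
```

A reliability analysis would wrap this evaluation in FORM/SORM or simulation, sampling (a_0, a_c, Y, C, m, σ) from their distributions; the margin decreases monotonically with the number of load cycles, as the accumulated sum grows.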


INTRODUCTION TO FINITE ELEMENT METHODS INTRODUCTION TO FINITE ELEMENT METHODS LONG CHEN Finite element methods are based on the variational formulation of partial differential equations which only need to compute the gradient of a function.

More information

Nonlinear Optimization for Optimal Control

Nonlinear Optimization for Optimal Control Nonlinear Optimization for Optimal Control Pieter Abbeel UC Berkeley EECS Many slides and figures adapted from Stephen Boyd [optional] Boyd and Vandenberghe, Convex Optimization, Chapters 9 11 [optional]

More information

Regression: Lecture 2

Regression: Lecture 2 Regression: Lecture 2 Niels Richard Hansen April 26, 2012 Contents 1 Linear regression and least squares estimation 1 1.1 Distributional results................................ 3 2 Non-linear effects and

More information

Extreme value distributions for nonlinear transformations of vector Gaussian processes

Extreme value distributions for nonlinear transformations of vector Gaussian processes Probabilistic Engineering Mechanics 22 (27) 136 149 www.elsevier.com/locate/probengmech Extreme value distributions for nonlinear transformations of vector Gaussian processes Sayan Gupta, P.H.A.J.M. van

More information

On fast trust region methods for quadratic models with linear constraints. M.J.D. Powell

On fast trust region methods for quadratic models with linear constraints. M.J.D. Powell DAMTP 2014/NA02 On fast trust region methods for quadratic models with linear constraints M.J.D. Powell Abstract: Quadratic models Q k (x), x R n, of the objective function F (x), x R n, are used by many

More information

RESPONSE SURFACE METHODS FOR STOCHASTIC STRUCTURAL OPTIMIZATION

RESPONSE SURFACE METHODS FOR STOCHASTIC STRUCTURAL OPTIMIZATION Meccanica dei Materiali e delle Strutture Vol. VI (2016), no.1, pp. 99-106 ISSN: 2035-679X Dipartimento di Ingegneria Civile, Ambientale, Aerospaziale, Dei Materiali DICAM RESPONSE SURFACE METHODS FOR

More information

BALANCING GAUSSIAN VECTORS. 1. Introduction

BALANCING GAUSSIAN VECTORS. 1. Introduction BALANCING GAUSSIAN VECTORS KEVIN P. COSTELLO Abstract. Let x 1,... x n be independent normally distributed vectors on R d. We determine the distribution function of the minimum norm of the 2 n vectors

More information

Assessment of Probabilistic Methods for Mistuned Bladed Disk Vibration

Assessment of Probabilistic Methods for Mistuned Bladed Disk Vibration 46th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics & Materials Conference 18-21 April 2005, Austin, Texas AIAA 2005-1990 Assessment of Probabilistic Methods for Mistuned Bladed Disk Vibration

More information

Overview of Structural Reliability Analysis Methods Part I: Local Reliability Methods

Overview of Structural Reliability Analysis Methods Part I: Local Reliability Methods Overview of Structural Reliability Analysis Methods Part I: Local Reliability Methods ChangWu HUANG, *, Abdelkhalak El Hami, Bouchaïb Radi Normandie Université, INSA Rouen, LOFIMS, 76000 Rouen, France.

More information

PRINCIPLES OF STATISTICAL INFERENCE

PRINCIPLES OF STATISTICAL INFERENCE Advanced Series on Statistical Science & Applied Probability PRINCIPLES OF STATISTICAL INFERENCE from a Neo-Fisherian Perspective Luigi Pace Department of Statistics University ofudine, Italy Alessandra

More information

Math 302 Outcome Statements Winter 2013

Math 302 Outcome Statements Winter 2013 Math 302 Outcome Statements Winter 2013 1 Rectangular Space Coordinates; Vectors in the Three-Dimensional Space (a) Cartesian coordinates of a point (b) sphere (c) symmetry about a point, a line, and a

More information

Comparison of Virginia s College and Career Ready Mathematics Performance Expectations with the Common Core State Standards for Mathematics

Comparison of Virginia s College and Career Ready Mathematics Performance Expectations with the Common Core State Standards for Mathematics Comparison of Virginia s College and Career Ready Mathematics Performance Expectations with the Common Core State Standards for Mathematics February 17, 2010 1 Number and Quantity The Real Number System

More information

Reliability of uncertain dynamical systems with multiple design points

Reliability of uncertain dynamical systems with multiple design points Structural Safety 21 (1999) 113±133 www.elsevier.nl/locate/strusafe Reliability of uncertain dynamical systems with multiple design points S.K. Au, C. Papadimitriou, J.L. Beck* Division of Engineering

More information

Estimating functional uncertainty using polynomial chaos and adjoint equations

Estimating functional uncertainty using polynomial chaos and adjoint equations 0. Estimating functional uncertainty using polynomial chaos and adjoint equations February 24, 2011 1 Florida State University, Tallahassee, Florida, Usa 2 Moscow Institute of Physics and Technology, Moscow,

More information

Model Selection and Geometry

Model Selection and Geometry Model Selection and Geometry Pascal Massart Université Paris-Sud, Orsay Leipzig, February Purpose of the talk! Concentration of measure plays a fundamental role in the theory of model selection! Model

More information

Neural Network Training

Neural Network Training Neural Network Training Sargur Srihari Topics in Network Training 0. Neural network parameters Probabilistic problem formulation Specifying the activation and error functions for Regression Binary classification

More information

Extreme Value Analysis and Spatial Extremes

Extreme Value Analysis and Spatial Extremes Extreme Value Analysis and Department of Statistics Purdue University 11/07/2013 Outline Motivation 1 Motivation 2 Extreme Value Theorem and 3 Bayesian Hierarchical Models Copula Models Max-stable Models

More information

Uncertainty modelling using software FReET

Uncertainty modelling using software FReET Uncertainty modelling using software FReET D. Novak, M. Vorechovsky, R. Rusina Brno University of Technology Brno, Czech Republic 1/30 Outline Introduction Methods and main features Software FReET Selected

More information

Orthogonal Projection and Least Squares Prof. Philip Pennance 1 -Version: December 12, 2016

Orthogonal Projection and Least Squares Prof. Philip Pennance 1 -Version: December 12, 2016 Orthogonal Projection and Least Squares Prof. Philip Pennance 1 -Version: December 12, 2016 1. Let V be a vector space. A linear transformation P : V V is called a projection if it is idempotent. That

More information

Lecture 2: Review of Basic Probability Theory

Lecture 2: Review of Basic Probability Theory ECE 830 Fall 2010 Statistical Signal Processing instructor: R. Nowak, scribe: R. Nowak Lecture 2: Review of Basic Probability Theory Probabilistic models will be used throughout the course to represent

More information

Semidefinite and Second Order Cone Programming Seminar Fall 2012 Project: Robust Optimization and its Application of Robust Portfolio Optimization

Semidefinite and Second Order Cone Programming Seminar Fall 2012 Project: Robust Optimization and its Application of Robust Portfolio Optimization Semidefinite and Second Order Cone Programming Seminar Fall 2012 Project: Robust Optimization and its Application of Robust Portfolio Optimization Instructor: Farid Alizadeh Author: Ai Kagawa 12/12/2012

More information

WA State Common Core Standards - Mathematics

WA State Common Core Standards - Mathematics Number & Quantity The Real Number System Extend the properties of exponents to rational exponents. 1. Explain how the definition of the meaning of rational exponents follows from extending the properties

More information

Introduction to Maximum Likelihood Estimation

Introduction to Maximum Likelihood Estimation Introduction to Maximum Likelihood Estimation Eric Zivot July 26, 2012 The Likelihood Function Let 1 be an iid sample with pdf ( ; ) where is a ( 1) vector of parameters that characterize ( ; ) Example:

More information

Dynamic System Identification using HDMR-Bayesian Technique

Dynamic System Identification using HDMR-Bayesian Technique Dynamic System Identification using HDMR-Bayesian Technique *Shereena O A 1) and Dr. B N Rao 2) 1), 2) Department of Civil Engineering, IIT Madras, Chennai 600036, Tamil Nadu, India 1) ce14d020@smail.iitm.ac.in

More information

RISK AND RELIABILITY IN OPTIMIZATION UNDER UNCERTAINTY

RISK AND RELIABILITY IN OPTIMIZATION UNDER UNCERTAINTY RISK AND RELIABILITY IN OPTIMIZATION UNDER UNCERTAINTY Terry Rockafellar University of Washington, Seattle AMSI Optimise Melbourne, Australia 18 Jun 2018 Decisions in the Face of Uncertain Outcomes = especially

More information

Optimization Problems with Probabilistic Constraints

Optimization Problems with Probabilistic Constraints Optimization Problems with Probabilistic Constraints R. Henrion Weierstrass Institute Berlin 10 th International Conference on Stochastic Programming University of Arizona, Tucson Recommended Reading A.

More information

Lecture 3. G. Cowan. Lecture 3 page 1. Lectures on Statistical Data Analysis

Lecture 3. G. Cowan. Lecture 3 page 1. Lectures on Statistical Data Analysis Lecture 3 1 Probability (90 min.) Definition, Bayes theorem, probability densities and their properties, catalogue of pdfs, Monte Carlo 2 Statistical tests (90 min.) general concepts, test statistics,

More information

Probability and statistics; Rehearsal for pattern recognition

Probability and statistics; Rehearsal for pattern recognition Probability and statistics; Rehearsal for pattern recognition Václav Hlaváč Czech Technical University in Prague Czech Institute of Informatics, Robotics and Cybernetics 166 36 Prague 6, Jugoslávských

More information

Use of Simulation in Structural Reliability

Use of Simulation in Structural Reliability Structures 008: Crossing Borders 008 ASCE Use of Simulation in Structural Reliability Author: abio Biondini, Department of Structural Engineering, Politecnico di Milano, P.za L. Da Vinci 3, 033 Milan,

More information

A univariate approximation at most probable point for higher-order reliability analysis

A univariate approximation at most probable point for higher-order reliability analysis International Journal of Solids and Structures 43 (2006) 2820 2839 www.elsevier.com/locate/ijsolstr A univariate approximation at most probable point for higher-order reliability analysis S. Rahman *,

More information

Time-varying failure rate for system reliability analysis in large-scale railway risk assessment simulation

Time-varying failure rate for system reliability analysis in large-scale railway risk assessment simulation Time-varying failure rate for system reliability analysis in large-scale railway risk assessment simulation H. Zhang, E. Cutright & T. Giras Center of Rail Safety-Critical Excellence, University of Virginia,

More information

Estimation of Quantiles

Estimation of Quantiles 9 Estimation of Quantiles The notion of quantiles was introduced in Section 3.2: recall that a quantile x α for an r.v. X is a constant such that P(X x α )=1 α. (9.1) In this chapter we examine quantiles

More information

Basic Aspects of Discretization

Basic Aspects of Discretization Basic Aspects of Discretization Solution Methods Singularity Methods Panel method and VLM Simple, very powerful, can be used on PC Nonlinear flow effects were excluded Direct numerical Methods (Field Methods)

More information

Sequential Monte Carlo methods for filtering of unobservable components of multidimensional diffusion Markov processes

Sequential Monte Carlo methods for filtering of unobservable components of multidimensional diffusion Markov processes Sequential Monte Carlo methods for filtering of unobservable components of multidimensional diffusion Markov processes Ellida M. Khazen * 13395 Coppermine Rd. Apartment 410 Herndon VA 20171 USA Abstract

More information

A Moving Kriging Interpolation Response Surface Method for Structural Reliability Analysis

A Moving Kriging Interpolation Response Surface Method for Structural Reliability Analysis Copyright 2013 Tech Science Press CMES, vol.93, no.6, pp.469-488, 2013 A Moving Kriging Interpolation Response Surface Method for Structural Reliability Analysis W. Zhao 1,2, J.K. Liu 3, X.Y. Li 2, Q.W.

More information

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces.

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces. Math 350 Fall 2011 Notes about inner product spaces In this notes we state and prove some important properties of inner product spaces. First, recall the dot product on R n : if x, y R n, say x = (x 1,...,

More information

DESK Secondary Math II

DESK Secondary Math II Mathematical Practices The Standards for Mathematical Practice in Secondary Mathematics I describe mathematical habits of mind that teachers should seek to develop in their students. Students become mathematically

More information

ORIGINS OF STOCHASTIC PROGRAMMING

ORIGINS OF STOCHASTIC PROGRAMMING ORIGINS OF STOCHASTIC PROGRAMMING Early 1950 s: in applications of Linear Programming unknown values of coefficients: demands, technological coefficients, yields, etc. QUOTATION Dantzig, Interfaces 20,1990

More information

Multivariate Distributions

Multivariate Distributions IEOR E4602: Quantitative Risk Management Spring 2016 c 2016 by Martin Haugh Multivariate Distributions We will study multivariate distributions in these notes, focusing 1 in particular on multivariate

More information

Part III. Hypothesis Testing. III.1. Log-rank Test for Right-censored Failure Time Data

Part III. Hypothesis Testing. III.1. Log-rank Test for Right-censored Failure Time Data 1 Part III. Hypothesis Testing III.1. Log-rank Test for Right-censored Failure Time Data Consider a survival study consisting of n independent subjects from p different populations with survival functions

More information

Sample Spaces, Random Variables

Sample Spaces, Random Variables Sample Spaces, Random Variables Moulinath Banerjee University of Michigan August 3, 22 Probabilities In talking about probabilities, the fundamental object is Ω, the sample space. (elements) in Ω are denoted

More information

Statistical Estimation

Statistical Estimation Statistical Estimation Use data and a model. The plug-in estimators are based on the simple principle of applying the defining functional to the ECDF. Other methods of estimation: minimize residuals from

More information

Reliability-based calibration of design seismic response spectra and structural acceptance criteria

Reliability-based calibration of design seismic response spectra and structural acceptance criteria Reliability-based calibration of design seismic response spectra and structural acceptance criteria C. Loth & J. W. Baker Department of Civil and Environmental Engineering Stanford University ABSTRACT:

More information

A COMPARISON OF WIND TURBINE DESIGN LOADS IN DIFFERENT ENVIRONMENTS USING INVERSE RELIABILITY

A COMPARISON OF WIND TURBINE DESIGN LOADS IN DIFFERENT ENVIRONMENTS USING INVERSE RELIABILITY AIAA--5 A COMPARISON OF WIND TURBINE DESIGN LOADS IN DIFFERENT ENVIRONMENTS USING INVERSE RELIABILITY Korn Saranyasoontorn Lance Manuel Department of Civil Engineering, University of Texas at Austin, Austin,

More information

Stat 5421 Lecture Notes Proper Conjugate Priors for Exponential Families Charles J. Geyer March 28, 2016

Stat 5421 Lecture Notes Proper Conjugate Priors for Exponential Families Charles J. Geyer March 28, 2016 Stat 5421 Lecture Notes Proper Conjugate Priors for Exponential Families Charles J. Geyer March 28, 2016 1 Theory This section explains the theory of conjugate priors for exponential families of distributions,

More information

Reliability Theory of Dynamic Loaded Structures (cont.) Calculation of Out-Crossing Frequencies Approximations to the Failure Probability.

Reliability Theory of Dynamic Loaded Structures (cont.) Calculation of Out-Crossing Frequencies Approximations to the Failure Probability. Outline of Reliability Theory of Dynamic Loaded Structures (cont.) Calculation of Out-Crossing Frequencies Approximations to the Failure Probability. Poisson Approximation. Upper Bound Solution. Approximation

More information

Multiple Random Variables

Multiple Random Variables Multiple Random Variables Joint Probability Density Let X and Y be two random variables. Their joint distribution function is F ( XY x, y) P X x Y y. F XY ( ) 1, < x

More information

Probabilistic analysis of off-center cracks in cylindrical structures

Probabilistic analysis of off-center cracks in cylindrical structures International Journal of Pressure Vessels and Piping 77 (2000) 3 16 www.elsevier.com/locate/ijpvp Probabilistic analysis of off-center cracks in cylindrical structures S. Rahman*, G. Chen a, R. Firmature

More information

CLASS NOTES Computational Methods for Engineering Applications I Spring 2015

CLASS NOTES Computational Methods for Engineering Applications I Spring 2015 CLASS NOTES Computational Methods for Engineering Applications I Spring 2015 Petros Koumoutsakos Gerardo Tauriello (Last update: July 27, 2015) IMPORTANT DISCLAIMERS 1. REFERENCES: Much of the material

More information

Monte Carlo Simulation for Reliability Analysis of Emergency and Standby Power Systems

Monte Carlo Simulation for Reliability Analysis of Emergency and Standby Power Systems Monte Carlo Simulation for Reliability Analysis of Emergency and Standby Power Systems Chanan Singh, Fellow, IEEE Joydeep Mitra, Student Member, IEEE Department of Electrical Engineering Texas A & M University

More information

Lecture 10. Failure Probabilities and Safety Indexes

Lecture 10. Failure Probabilities and Safety Indexes Lecture 10. Failure Probabilities and Safety Indexes Igor Rychlik Chalmers Department of Mathematical Sciences Probability, Statistics and Risk, MVE300 Chalmers May 2013 Safety analysis - General setup:

More information

Algebra 1 Mathematics: to Hoover City Schools

Algebra 1 Mathematics: to Hoover City Schools Jump to Scope and Sequence Map Units of Study Correlation of Standards Special Notes Scope and Sequence Map Conceptual Categories, Domains, Content Clusters, & Standard Numbers NUMBER AND QUANTITY (N)

More information

Cross entropy-based importance sampling using Gaussian densities revisited

Cross entropy-based importance sampling using Gaussian densities revisited Cross entropy-based importance sampling using Gaussian densities revisited Sebastian Geyer a,, Iason Papaioannou a, Daniel Straub a a Engineering Ris Analysis Group, Technische Universität München, Arcisstraße

More information

GENERAL MULTIVARIATE DEPENDENCE USING ASSOCIATED COPULAS

GENERAL MULTIVARIATE DEPENDENCE USING ASSOCIATED COPULAS REVSTAT Statistical Journal Volume 14, Number 1, February 2016, 1 28 GENERAL MULTIVARIATE DEPENDENCE USING ASSOCIATED COPULAS Author: Yuri Salazar Flores Centre for Financial Risk, Macquarie University,

More information

Algorithms for Uncertainty Quantification

Algorithms for Uncertainty Quantification Algorithms for Uncertainty Quantification Lecture 9: Sensitivity Analysis ST 2018 Tobias Neckel Scientific Computing in Computer Science TUM Repetition of Previous Lecture Sparse grids in Uncertainty Quantification

More information

Reliability Based Design Optimization of Systems with. Dynamic Failure Probabilities of Components. Arun Bala Subramaniyan

Reliability Based Design Optimization of Systems with. Dynamic Failure Probabilities of Components. Arun Bala Subramaniyan Reliability Based Design Optimization of Systems with Dynamic Failure Probabilities of Components by Arun Bala Subramaniyan A Thesis Presented in Partial Fulfillment of the Requirements for the Degree

More information