component risk analysis


1 12735: Urban Systems Modeling, Lec. 03: component risk analysis. Instructor: Matteo Pozzi.

2 outline: risk analysis for components; uncertain demand and uncertain capacity; multivariate normal distribution: definition, properties, transformation to standard normal space, design point; multivariate lognormal distribution; First Order Reliability Method; sensitivity analysis.

3 examples of components, general framework. A structure, say a bridge; a road segment; an electrical component; a pump in a water system. A component is modeled by a set of random variables describing loads, demands, capacity, resistance, and other features affecting its behavior. These variables are modeled by a joint distribution. The functioning of the component is described by a binary variable: the safe (or functioning) state, and the failure state. Task: computing the probability of failure, e.g. P(failure) = 9%.

4 PART I: approaches to component risk analysis

5 reliability with uncertain load and resistance: method I. The load: S ~ N(μ_S, σ_S²); the resistance: R ~ N(μ_R, σ_R²); S and R independent. Safe condition: R ≥ S; limit state function: M = R − S, failure when M < 0. The difference between two normal r.v.s is a normal r.v. [to be proved later], so M ~ N(μ_R − μ_S, σ_R² + σ_S²), always true for the first two moments. Hence P_f = P(M ≤ 0) = Φ(−(μ_R − μ_S)/√(σ_R² + σ_S²)), with Φ the standard normal cdf. Think of M as the residual capacity. If the resistance is known to be r, P_f = P(S > r) = Φ((μ_S − r)/σ_S). [figure: densities p(s) and p(r) on a common axis r, s.]
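The method I computation can be sketched numerically. The load and resistance moments below are hypothetical values chosen for illustration (the slide's numbers are not recoverable), and only the standard library is used.

```python
from math import sqrt
from statistics import NormalDist

# hypothetical moments (kN): load S ~ N(10, 2^2), resistance R ~ N(20, 3^2), independent
mu_s, sig_s = 10.0, 2.0
mu_r, sig_r = 20.0, 3.0

# margin M = R - S is normal: N(mu_r - mu_s, sig_r^2 + sig_s^2)
beta = (mu_r - mu_s) / sqrt(sig_r**2 + sig_s**2)   # reliability index
pf = NormalDist().cdf(-beta)                       # P_f = Phi(-beta)
```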

6 reliability with uncertain load and resistance: method II. Joint probability in vector notation: x = [s, r]ᵀ ~ N(μ, Σ), p(x) ∝ exp(−½ (x − μ)ᵀ Σ⁻¹ (x − μ)). With correlated random variables, P_f = P(g(x) ≤ 0) is computed from the joint density. [figure: joint density p(x₁, x₂) over the plane x₁ = s, x₂ = r.]

7 example of reliability problem: load and resistance as a multivariate normal variable [defined later], written in vector notation, with means, standard deviations (in kN) and correlation assigned. Reliability index [defined later]: β; probability of damage: Φ(−β) = 0.89%.

8 example of reliability, changing correlation: the same multivariate normal model [defined later], varying the correlation coefficient ρ_sr between load and resistance; the failure probability P_f changes with ρ_sr. [figure: density contours in the (x₁ = s, x₂ = r) plane and the resulting P_f for several values of ρ_sr.]
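The effect of correlation can be sketched with the closed-form margin variance σ_S² + σ_R² − 2ρσ_Sσ_R; the moments below are hypothetical illustration values, not the slide's.

```python
from math import sqrt
from statistics import NormalDist

# hypothetical moments: load S ~ N(10, 2^2), resistance R ~ N(20, 3^2)
mu_s, sig_s = 10.0, 2.0
mu_r, sig_r = 20.0, 3.0

def p_f(rho):
    # margin M = R - S has variance sig_s^2 + sig_r^2 - 2*rho*sig_s*sig_r
    beta = (mu_r - mu_s) / sqrt(sig_s**2 + sig_r**2 - 2*rho*sig_s*sig_r)
    return NormalDist().cdf(-beta)

pfs = [p_f(rho) for rho in (-0.9, -0.3, 0.0, 0.3, 0.9)]
```

With μ_R > μ_S, positive correlation shrinks the variance of the margin, so P_f decreases as ρ grows.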

9 reliability with uncertain load and resistance: method III. The plane (x₁ = s, x₂ = r) is divided by the limit state g(x) = 0 into a safe domain and a failure domain. The classical reliability problem is to solve the integral P_f = ∫_{g(x) ≤ 0} p(x) dx. [figure: joint density p(x₁, x₂), limit state function g(x₁, x₂), safe and failure domains.]

10 reliability with uncertain load and resistance: method III [cont.]. Transformation to the standard normal space: map x = [s, r]ᵀ into standard normal variables u; then the computation is easy. [figure: limit state line in the physical space (x₁ = s, x₂ = r) and in the standard normal space (u₁, u₂).]

11 reliability with uncertain load and resistance: method IV. Conditional cumulative distribution: P_f = ∫ F_R(s) p_S(s) ds, i.e. the probability that the resistance is lower than the load, for load equal to s, averaged over the load distribution. [figure: limit state in the (x₁ = s, x₂ = r) plane.]

12 reliability with uncertain load and resistance: method IV [cont.]. Alternative formulation: P_f = ∫ [1 − F_S(r)] p_R(r) dr, i.e. the probability that the load is higher than the resistance, for resistance equal to r; equivalently P_f = ∫ F_R(s) p_S(s) ds, the probability that the resistance is lower than the load, for load equal to s. [figure: densities p(s), p(r) and cdfs F(s), F(r).]

13 towards the multivariate normal distribution. For independent components, p(x) = ∏_i (1/(√(2π) σ_i)) exp(−(x_i − μ_i)²/(2σ_i²)) = (2π)^(−n/2) |Σ|^(−1/2) exp(−½ (x − μ)ᵀ Σ⁻¹ (x − μ)), with n the number of dimensions, |Σ| the determinant of the covariance matrix, and parameters μ, Σ: this is the multivariate normal distribution N(x; μ, Σ).

14 PART II: Gaussian model for component risk analysis

15 multivariate normal distribution. pdf: N(x; μ, Σ) = (2π)^(−n/2) |Σ|^(−1/2) exp(−½ (x − μ)ᵀ Σ⁻¹ (x − μ)). x is the vector of random variables; μ = E[x] is the mean vector; Σ is the covariance matrix, with Σ_ii = var(x_i) and Σ_ij = cov(x_i, x_j). Example parameters: μ₁ = 4, μ₂ = 6, σ₁ = 2, σ₂ = 3, ρ₁₂ = 0.4. [figure: surface plot of p(x₁, x₂).]
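The bivariate case of the pdf can be written out explicitly; a sketch using the slide's example parameters (only the standard library):

```python
from math import pi, sqrt, exp

# slide parameters: mu1 = 4, mu2 = 6, sigma1 = 2, sigma2 = 3, rho = 0.4
mu1, mu2, s1, s2, rho = 4.0, 6.0, 2.0, 3.0, 0.4

def pdf(x1, x2):
    # bivariate normal density, in terms of the standardized variables
    z1 = (x1 - mu1) / s1
    z2 = (x2 - mu2) / s2
    q = (z1**2 - 2*rho*z1*z2 + z2**2) / (1 - rho**2)   # (x-mu)' Sigma^-1 (x-mu)
    return exp(-0.5 * q) / (2 * pi * s1 * s2 * sqrt(1 - rho**2))

p_mode = pdf(mu1, mu2)   # the density is maximal at the mean
```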

16 multivariate normal distribution in log. scale. log N(x; μ, Σ) = −½ (x − μ)ᵀ Σ⁻¹ (x − μ) + const. Gradient: −Σ⁻¹ (x − μ); Hessian matrix: −Σ⁻¹; maximum at x = μ. The curvature is uniform and the Hessian is always negative definite, so the MVN is log-concave. Simplest case, the standard MVN: log N(u; 0, I) = −½ uᵀu + const. Example parameters: μ₁ = 4, μ₂ = 6, σ₁ = 2, σ₂ = 3, ρ₁₂ = 0.4. [figure: log-density surface.]

17 multivariate normal distribution: contour plot. Contour line: (x − μ)ᵀ Σ⁻¹ (x − μ) = const. The contour lines are ellipses centered in the mean. Examples, changing the correlation only: ρ₁₂ = 0.4, ρ₁₂ = 0, ρ₁₂ = −0.9 (with μ₁ = 4, μ₂ = 6, σ₁ = 2, σ₂ = 3). [figure: contour plots for the three correlation values.]

18 multivariate normal distribution: eigenvalues. Contour line: (x − μ)ᵀ Σ⁻¹ (x − μ) = const. Eigenvalue decomposition: Σ = V Λ Vᵀ, where V is the eigenvector matrix and Λ the (diagonal) eigenvalue matrix. The eigenvectors form an orthonormal base, i.e. Vᵀ V = I. Eigenproblem: Σ v_i = λ_i v_i. Example parameters: μ₁ = 4, μ₂ = 6, σ₁ = 2, σ₂ = 3, ρ₁₂ = 0.4. [figure: contour ellipse with principal axes.]

19 multivariate normal distribution: eigenvalues [cont.]. Principal components: u = Λ^(−1/2) Vᵀ (x − μ). In terms of the variables u, the contour lines (surfaces) are circles (spheres): u is standard MVN. [figure: circular contours in the principal-component coordinates.]

20 multivariate normal distribution: eigenvalues [cont.]. Principal components: u = Λ^(−1/2) Vᵀ (x − μ); inverse relation, expressing the original random variables as a function of the components: x = μ + V Λ^(1/2) u. Basic idea of the eigenvalue decomposition: to change point of view, from the canonical base to an orthonormal base centered in the mean; now the variables look uncorrelated. Rescaling by Λ^(−1/2), each variable also has unit variance. Matlab: [m_V, m_L] = eig(m_Sigma). [figure: contour ellipse with rotated axes.]

21 example of eigenvalue decomposition. Covariance matrix (σ₁ = 2, σ₂ = 3, ρ₁₂ = 0.4): Σ = [4, 2.4; 2.4, 9]; eigenvector matrix V and eigenvalue matrix Λ from Σ = V Λ Vᵀ (you may check that the product reconstructs Σ). The lengths of the ellipse's principal semi-axes are proportional to √λ_i. [figure: contour ellipse with principal axes.]
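A sketch of this check in NumPy (the analogue of the slide's Matlab eig call), using the slide's covariance matrix:

```python
import numpy as np

# slide covariance: sigma1 = 2, sigma2 = 3, rho = 0.4
Sigma = np.array([[4.0, 2.4],
                  [2.4, 9.0]])

lam, V = np.linalg.eigh(Sigma)   # eigenvalues (ascending) and orthonormal eigenvectors
semi_axes = np.sqrt(lam)         # principal semi-axes of the contour ellipse scale with sqrt(lambda_i)

# check: V @ diag(lam) @ V' reconstructs Sigma
Sigma_back = V @ np.diag(lam) @ V.T
```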

22 covariance matrix properties. Symmetry: Σ = Σᵀ; positive definiteness: zᵀ Σ z > 0 for any z ≠ 0. Decomposition Σ = D R D, where D = diag(σ₁, …, σ_n) and R is the correlation matrix. Examples (μ₁ = 4, μ₂ = 6, σ₁ = 2, σ₂ = 3): ρ₁₂ = 0, ρ₁₂ = 0.6, ρ₁₂ = −0.9. [figure: contour plots for the three correlation values.]

23 properties of the MVN: marginalization. For x ~ N(μ, Σ), the marginal distribution of any component is normal: integrating out the other variables, p(x_i) = N(x_i; μ_i, Σ_ii). Different joint models can share the same marginals: changing the correlation leaves the marginal distributions unchanged. [figure: joint densities with different correlations but identical marginals.]

24 properties of the MVN: marginalization [cont.]. Partition x = [x_a; x_b], μ = [μ_a; μ_b], Σ = [Σ_aa, Σ_ab; Σ_ba, Σ_bb]. Marginal probability: x_a ~ N(μ_a, Σ_aa). Marginalization may be computationally expensive in general; but if a vector of random variables is jointly normal, any subset is jointly normal as well, and its parameters can be directly read in those of the joint set.

25 properties of the MVN: conditional. After observing x_b, the conditional distribution of x_a is still normal: x_a | x_b ~ N(μ_a + Σ_ab Σ_bb⁻¹ (x_b − μ_b), Σ_aa − Σ_ab Σ_bb⁻¹ Σ_ba). The reduction of variance does not depend on the value observed. If uncorrelated (Σ_ab = 0), the conditional equals the marginal: for jointly normal random variables, uncorrelation and independence are equivalent.

26 properties of the MVN: conditional [cont.]. [figure: joint density p(x₁, x₂) with the marginals p(x₁), p(x₂) and conditional slices p(x₂ | x₁), p(x₁ | x₂).]

27 example of marginalization/conditional. x ~ N(μ, Σ), with given mean vector and covariance matrix. Marginalization: read the sub-vector of μ and sub-matrix of Σ for the variables of interest. Conditional: suppose we observe one component; the conditional mean shifts and the conditional variance shrinks (reduction of uncertainty).
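The bivariate case of the conditioning formulas above can be sketched with plain arithmetic; the observed value is hypothetical, the parameters are the slide's running example:

```python
# bivariate MVN conditioning, with the slide's parameters
mu1, mu2 = 4.0, 6.0
s1, s2, rho = 2.0, 3.0, 0.4
cov12 = rho * s1 * s2            # = 2.4

x2_obs = 9.0                     # hypothetical observed value of x2

# x1 | x2 is normal with:
mu_cond = mu1 + cov12 / s2**2 * (x2_obs - mu2)   # mean shifts with the observation
var_cond = s1**2 - cov12**2 / s2**2              # variance shrinks, independently of x2_obs
```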

28 recap: transformation of random variables in 1d. Random variable x, monotonically increasing transformation z = f(x), inverse x = f⁻¹(z) = g(z). Conservation of probability: F_z(z) = F_x(g(z)); hence p_z(z) = p_x(g(z)) |dg/dz|. [figure: densities and cdfs of x and z related through the map.]

29 transformation of multivariate random variables. Vector of r.v.s x with joint probability p_x(x), dim. n; invertible map y = f(x), inverse map x = f⁻¹(y); find p_y(y). J_f is the Jacobian of the map (an n×n matrix), J_{f⁻¹} the Jacobian of the inverse map. General formula: p_y(y) = p_x(f⁻¹(y)) |det J_{f⁻¹}(y)|.

30 transformation of multivariate random variables [cont.]. Probability (volume) is conserved: p_y(y) area(dy) = p_x(x) area(dx), so the density is rescaled by the ratio between the areas, e.g. for a uniform density mapped through a nonlinear transformation. [figure: a small region in the x plane and its image in the y plane, with the corresponding area ratio.]

31 sign of the determinant of the Jacobian. If the determinant is positive, the map preserves orientation; if negative, the map inverts orientation. We are only interested in the ratio between areas, hence we take the absolute value of the determinant. [figure: orientation-preserving and orientation-inverting maps.]

32 linear transformation of multivariate random variables. Linear transformation: y = A x + b; inverse transformation: x = A⁻¹ (y − b). For a linear transformation, the Jacobian (and consequently its determinant) is constant over the whole space: J = A. [figure: a region in the x plane and its image in the y plane.]

33 examples of linear transformations. Pure rotation: A = [cos(π/3), −sin(π/3); sin(π/3), cos(π/3)]; diagonal matrix (no rotation); general matrix: rotation and stretch combined. [figure: the image of a reference shape under each map.]

34 linear transformation of jointly normal random variables. x ~ N(μ, Σ); linear invertible transformation y = A x + b. Substituting into the density, y ~ N(A μ + b, A Σ Aᵀ): the same rule as for the mean vector and covariance matrix of any multivariate random vector. This proves that a linear combination of jointly normal r.v.s is also jointly normal. This is true in general, also for a transformation to a smaller space, e.g. from vector to scalar (proved by marginalization).

35 summary on jointly normal random variables. x ~ N(μ, Σ): the joint probability is completely defined by the mean vector and the covariance matrix, which are the parameters of the distribution. The conditional distribution, given any subset of variables, is also jointly normal. Each subset of x is jointly normally distributed, and marginalization is computationally trivial (just copy part of μ and Σ). Note: if the marginal probability of each variable is normal, this does not imply that the set of variables is jointly normal. Any linear transformation of the variables is jointly normal: A x + b ~ N(A μ + b, A Σ Aᵀ). ADVANCED: the variables can be easily mapped into the «standard normal space».

36 linear transformation of jointly normal random variables [cont.]. Distribution of g = aᵀ x + b: g ~ N(aᵀ μ + b, aᵀ Σ a), from the linear transformation rule. Sum: var(x₁ + x₂) = σ₁² + σ₂² + 2 cov(x₁, x₂); difference: var(x₁ − x₂) = σ₁² + σ₂² − 2 cov(x₁, x₂). Example with many loads and many resistances: limit state function g = Σ resistances − Σ loads.

37 sum of two random variables in the general case. Joint probability p(x, y); for z = x + y, p_z(z) = ∫ p(x, z − x) dx, a convolution integral: hopefully it can be solved for specific distributions. If independent, p_z(z) = ∫ p_x(x) p_y(z − x) dx. Second moment representation (always true): μ_z = μ_x + μ_y, σ_z² = σ_x² + σ_y² + 2 cov(x, y); difference: μ_z = μ_x − μ_y, σ_z² = σ_x² + σ_y² − 2 cov(x, y).

38 reliability for normal variables, with linear limit state function. Linear limit state function: g(x) = aᵀ x + b; safe condition g > 0, failure g ≤ 0. Distribution of g: g ~ N(aᵀ μ + b, aᵀ Σ a). Reliability index: β = (aᵀ μ + b)/√(aᵀ Σ a); probability of failure: P_f = Φ(−β). [figure: joint density p(x₁, x₂) cut by the limit state g(x₁, x₂) = 0.]
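A sketch of this formula in vector form, with a hypothetical correlated load/resistance pair (the numbers are illustration values, not the slide's):

```python
import numpy as np
from statistics import NormalDist

# x = [s, r] jointly normal; g(x) = a'x + b with a = [-1, 1], b = 0, i.e. g = r - s
a = np.array([-1.0, 1.0])
b = 0.0
mu = np.array([10.0, 20.0])          # hypothetical means of load and resistance
Sigma = np.array([[4.0, 1.8],
                  [1.8, 9.0]])       # sig_s = 2, sig_r = 3, rho = 0.3

mu_g = a @ mu + b                    # mean of g
sig_g = np.sqrt(a @ Sigma @ a)       # std of g
beta = mu_g / sig_g                  # reliability index
pf = NormalDist().cdf(-beta)         # P_f = Phi(-beta)
```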

39 why do we assume a MVN model? Consider x ~ p(x) [not necessarily normal]: do μ and Σ exist (can they be computed)? Consider g linearly related to x: g = aᵀ x + b, with safe condition g > 0, so that P_f = P(g ≤ 0). We can always compute E[g] and var[g]; the normal approximation then estimates P_f from these moments. Why do we need x ~ N? Because then we get g ~ N exactly, and we can easily compute P_f. [figure: exact cdf of g vs. its normal approximation; in the example the two give quite different failure probabilities (44% vs. 27%).]

40 PART III: transformation of the Gaussian model

41 transformation to standard normal space. Given μ and Σ, find L so that x = μ + L u, with u standard normal; this requires Σ = L Lᵀ. Cholesky decomposition: given any positive definite matrix Σ, L = chol(Σ) is a lower triangular matrix so that L Lᵀ = Σ. Eigenvalue analysis: Σ = V Λ Vᵀ with V orthonormal and Λ diagonal, so L = V Λ^(1/2) also works. The two choices are not the same map; Cholesky is simpler. Standard normalization: u = L⁻¹ (x − μ). Matlab: m_L = chol(m_Sigma, 'lower').
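A sketch of the Cholesky-based map in NumPy (the analogue of the slide's Matlab chol call); the point x is a hypothetical illustration value:

```python
import numpy as np

mu = np.array([4.0, 6.0])
Sigma = np.array([[4.0, 2.4],
                  [2.4, 9.0]])

L = np.linalg.cholesky(Sigma)        # lower triangular, L @ L.T == Sigma

# standard normalization: u = L^-1 (x - mu); inverse relation: x = mu + L u
x = np.array([6.0, 9.0])             # hypothetical point in the physical space
u = np.linalg.solve(L, x - mu)
x_back = mu + L @ u
```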

42 example of transformation to standard normal space. x ~ N(μ, Σ); inverse relation x = μ + L u. The Cholesky factor (from chol) and the eigenvalue-based factor V Λ^(1/2) are different matrices, but both satisfy L Lᵀ = Σ, and both map the standard normal u into the assigned distribution of x.

43 density in the standard normal space. u ~ N(0, I); with r = ‖u‖: p(u) = (2π)^(−n/2) exp(−½ uᵀ u) = (2π)^(−n/2) exp(−r²/2). Maximum density in the origin, fast decay in the radial direction. Radial symmetry: the density depends only on r. [figure: surface of p(u₁, u₂), its slices p(u₁, u₂ = 0) and p(u₁ = 0, u₂), and the pdf/cdf of the radial distance.]

44 rotation in the standard normal space. New reference system u′ = Qᵀ u, with Q orthonormal (Qᵀ Q = I). Distribution in the new coordinates: p(u′) = (2π)^(−n/2) exp(−½ u′ᵀ u′) = N(u′; 0, I): the distribution is invariant with respect to rotation.

45 properties of the standard normal space. For each number of dimensions, there is just one standard normal space. The distribution is invariant with respect to rotation. The origin is the mean vector and it is the (only) mode (i.e. maximum). Each variable is scaled to (zero mean and) unit standard deviation. Each variable is independent of the others. All marginal distributions are the same: standard normal. The density at one point depends only on the distance from the origin and on the number of dimensions.

46 reliability in the standard normal space. Transformation from standard normal to physical space: x = T(u). Limit state function in the standard normal space: g_u(u) = g(T(u)). Linear limit state functions stay linear. As expected, reliability in the standard normal space and in the physical space are equivalent: the probability of failure is the same integral, computed over the transformed failure domain.

47 design point in the standard normal space. Design point: the most dangerous condition, u* = arg max p(u) over the failure domain. In the standard normal space, p(u) ∝ exp(−½ ‖u‖²), so u* = arg min ‖u‖ subject to g(u) ≤ 0: the design point is the point in the failure domain closest to the origin. If g(0) > 0, the origin is in the safe domain, i.e. low probability of failure. The design point: belongs to the failure domain (it is on the edge safe/failure); has a high probability (the highest in the failure domain); can be found by solving a constrained optimization problem; for linear limit state functions, the solution is very simple.

48 design point in the standard normal space [cont.]. Conditions to find the design point: g(u*) = 0 (it lies on the limit state surface), and u* = β α, where α = −∇g(u*)/‖∇g(u*)‖ is the negative normalized gradient and β = ‖u*‖: the design point is anti-parallel to the gradient of the limit state function.

49 reliability using the design point, in standard normal space. From the coordinates of the design point: reliability index β = ‖u*‖, probability of failure P_f = Φ(−β) [check that this is consistent with the previous result]. Once you have found the design point, you can measure how far the failure domain is from the origin, and compute the probability of failure. Summary: transform your belief into the standard normal space, and define the new limit state function; find the design point; measure how far it is from the origin to get the reliability index.

50 design point in the physical space [normal variables]. The design point is the most dangerous scenario: it is an (incipient) failure condition, and it is the scenario with the highest probability in the failure domain. If the map is linear, the design point in the physical coordinates is x* = T(u*). If it is not linear, the previous equation is only approximate. [figure: design point in the standard normal space (u₁, u₂) and in the physical space (x₁ = s, x₂ = r).]

51 reliability index. It gives the order of magnitude of the probability of failure: P_f = Φ(−β) [exactly, if the limit state surface is a hyperplane; approximately, if it is regular and continuous]. IF the limit state function is linear in the standard normal space, then the reliability index is the distance between the origin and the design point. IF NOT, this is not necessarily the case. [table: example values of β and the corresponding P_f = Φ(−β).]

52 lognormal multivariate distribution. Log scale: x = log z, with x ~ N(μ, Σ); linear scale: z = exp(x), z ~ lnN(μ, Σ). The Jacobian of the map x = log z is diagonal with entries 1/z_i, and its determinant is ∏_i 1/z_i. Lognormal density: p(z) = N(log z; μ, Σ) ∏_i 1/z_i.

53 moments to parameters for the lognormal multivariate distribution. z ~ lnN(μ_ln, Σ_ln), with parameters defined in the log scale, and moments (mean μ_i, std σ_i, c.o.v. δ_i = σ_i/μ_i, correlation ρ_ij) in the linear scale. Relations: σ_ln,i² = ln(1 + δ_i²), so σ_ln,i ≈ δ_i for small δ_i; μ_ln,i = ln μ_i − σ_ln,i²/2, as in the 1d case; ρ_ln,ij σ_ln,i σ_ln,j = ln(1 + ρ_ij δ_i δ_j), so ρ_ln,ij ≈ ρ_ij for small δ.
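The 1d moment-to-parameter relations can be sketched with a round-trip check; the moments are hypothetical illustration values, and only the standard library is used.

```python
from math import log, sqrt, exp

# hypothetical moments in the linear scale
mu_z, sig_z = 10.0, 2.0
delta = sig_z / mu_z                       # coefficient of variation

sig_ln = sqrt(log(1 + delta**2))           # std of log z
mu_ln = log(mu_z) - 0.5 * sig_ln**2        # mean of log z

# round trip: back to the linear-scale moments
mu_back = exp(mu_ln + 0.5 * sig_ln**2)
sig_back = mu_back * sqrt(exp(sig_ln**2) - 1)
```

For this small δ = 0.2, sig_ln ≈ 0.198: close to δ, as the slide's approximation states.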

54 properties of the lognormal multivariate distribution. It is completely defined by the mean vector and covariance matrix; marginal and conditional distributions of any subset of random variables are lognormal; product functions are jointly lognormal; uncorrelation implies independence. [figure: joint density p(z₁, z₂) for an example parameter set.]

55 r/s problem with lognormal random variables. Load and resistance: [s, r]ᵀ ~ lnN(μ, Σ). Consider the load-to-resistance ratio: safe condition if r/s ≥ 1, i.e. log r − log s ≥ 0; failure if log r − log s < 0. Limit state function: g = log r − log s; safe condition g ≥ 0, failure g < 0. In the logarithmic scale, the formulation is equivalent to that with normal random variables: [log s, log r]ᵀ ~ N(μ, Σ).

56 reliability problem with lognormal random variables. z ~ lnN(μ, Σ); a limit state function that is linear in log z can be re-shaped as a linear limit state function on a jointly normally distributed variable x = log z. Example: failure when a product/ratio of the z_i falls below a threshold, i.e. when the corresponding linear combination of the log z_i is negative.

57 PART IV: general approach and FORM

58 general reliability problem. Given the joint probability p(x) and the limit state function g(x), compute P_f = ∫_{g(x) ≤ 0} p(x) dx. When many random variables are involved in the problem (high dimensional space), it is expensive to compute the integral. No analytical solutions are generally available. The integral can be solved numerically, by counting along a grid (but that method is ineffective because of the curse of dimensionality). Approximate solutions are provided by reliability methods (FORM: First Order Reliability Method) or by simulations (Monte Carlo). For reliability methods and for simulations, it is convenient to formulate the problem in the standard normal space.

59 general reliability problem in standard normal space. Given the joint probability p(x) and the limit state function g(x), compute P_f. Find a transformation to the standard normal space: x = T(u), with u ~ N(0, I); then P_f = ∫_{g(T(u)) ≤ 0} N(u; 0, I) du.

60 going to the standard normal space. It can be easily done for any jointly normal distribution (e.g. using Cholesky); also for any jointly lognormal distribution (taking the log, and then using Cholesky). It can also be done for any distribution (Rosenblatt transformation), but it may get complicated. Why we prefer this space: once here, the distribution is very simple; the variables are on the same scale and uncorrelated; you can easily generate samples from the distribution; you can approximate the solution by finding the design point (FORM).

61 First Order Reliability Method (FORM). Find the design point in the standard normal space: u* = arg max p(u) over the failure domain = arg min ‖u‖ subject to g(u) ≤ 0. Approximate the limit state with its linear approximation at the design point, and compute the approximate reliability index β = ‖u*‖: P_f ≈ Φ(−β).

62 FORM in 1d. Standard normal variable u; nonlinear limit state function g(u), with u = 0 in the safe domain. Design point u*: the boundary point, g(u*) = 0. Approximation: P_f ≈ Φ(−u*). To find the design point, find the zero of g by the Newton-Raphson method: start at u₀; approximate g by its Taylor expansion, using the derivative (the gradient, in higher dimensions): u_{k+1} = u_k − g(u_k)/g′(u_k); repeat until convergence. [figure: density p(u) and limit state g(u), with safe and failure domains.]

64 FORM for more than one variable. Probability N(u; 0, I) [standard normal space]; limit state function g(u) [generally nonlinear]. Linear approximation around u_k [Taylor]: g(u) ≈ g(u_k) + ∇g(u_k)ᵀ (u − u_k), with ∇g the gradient. Design point: the zero of the linearized limit state closest to the origin; its distance from the origin approximates β.

65 FORM iterative method. Select u₀; repeat: from u_k compute g(u_k) and ∇g(u_k), and set u_{k+1} = [(∇g(u_k)ᵀ u_k − g(u_k))/‖∇g(u_k)‖²] ∇g(u_k); until convergence at u*. Set β = ‖u*‖ and P_f ≈ Φ(−β).
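The iteration above can be sketched on a hypothetical nonlinear limit state (not the slide's example, whose numbers are not recoverable):

```python
import numpy as np
from statistics import NormalDist

# hypothetical nonlinear limit state in the standard normal space, failure when g <= 0
def g(u):
    return 2.0 + 0.1 * u[0]**2 - u[1]

def grad_g(u):
    return np.array([0.2 * u[0], -1.0])

u = np.zeros(2)                      # start at the origin
for _ in range(100):
    gr = grad_g(u)
    u_new = ((gr @ u - g(u)) / (gr @ gr)) * gr   # the slide's update rule
    if np.linalg.norm(u_new - u) < 1e-10:
        u = u_new
        break
    u = u_new

beta = np.linalg.norm(u)             # reliability index
pf = NormalDist().cdf(-beta)         # FORM estimate
```

For this g, the design point is (0, 2), so β = 2 and P_f ≈ Φ(−2) ≈ 2.3%.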

66 example of FORM. [figure: a nonlinear limit state function g(u₁, u₂) over the standard normal plane, shown as a surface and as a contour plot.]

68 example of FORM. [figure: iteration history of the design-point search in the (u₁, u₂) plane, with the FORM estimate of P_f compared against a Monte Carlo estimate.]

69 gradient following a transformation. Invertible map x = T(u), inverse u = T⁻¹(x); limit state function in the transformed space: g_u(u) = g(T(u)). Chain rule: ∇_u g_u(u) = J_T(u)ᵀ ∇_x g(T(u)). Proof sketch: if we have the Jacobian of the map and the gradient of g in the physical space, we can compute the gradient in the transformed space by the chain rule, multiplying Jacobian and gradient.

70 Jacobian of a composed map. Invertible maps y = f(x) and z = h(y): for the composition z = h(f(x)), J_{h∘f}(x) = J_h(f(x)) J_f(x). Proof sketch: the chain rule, multiplying the two matrices.

71 example of reliability problem by FORM. z ~ lnN(μ, Σ); map to the normal space x = log z (so z = exp(x), x ~ N(μ, Σ)), and then to the standard normal space u (x = μ + L u, u ~ N(0, I)). The limit state function is expressed in turn in the physical, normal and standard normal space by composing with these maps. [figure: density and limit state in the physical space.]

72 example of reliability problem by FORM [cont.]. The limit state function is not linear in the standard normal space. Reference result: P_f = 0.97% (from integration and Monte Carlo). [figure: limit state surface g(u₁, u₂) and design point in the standard normal plane.]

73 reliability problem by FORM: iterative scheme. The limit state function is not linear in the standard normal space; reference result: P_f = 0.97% (from integration and Monte Carlo). At each step k, the current point u_k, the linearized limit state and the corresponding estimate Φ(−β_k) are shown. [figure: iteration frame in the (u₁, u₂) plane.]

74-80 reliability problem by FORM: iterative scheme [cont.]. Successive frames of the same iteration: the estimate Φ(−β_k) converges in a few steps, and the final FORM estimate is compared with the Monte Carlo result of 0.97%.

81 FORM importance measures in the standard normal space. Linearized limit state function at the design point: g(u) ≈ ∇g(u*)ᵀ (u − u*). Direction of the design point: u* = β α, with α = −∇g(u*)/‖∇g(u*)‖. Importance measure α_i: α_i > 0: u_i acts as a load; α_i < 0: u_i acts as a capacity; α_i ≈ 0: u_i is irrelevant. |α_i| > |α_j| means that u_i is more important than u_j.

82 FORM importance measures in the standard normal space [cont.]. In the standard normal space u ~ N(0, I); with the linearized limit state function, the direction α of the design point gives the importance measure: α_i gives the importance of variable u_i in the problem (α_i > 0: load; α_i < 0: capacity; α_i ≈ 0: irrelevant).
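A sketch for a linear limit state, with hypothetical load/resistance moments; the sign convention is the one stated above (α as the negative normalized gradient, u* = βα):

```python
import numpy as np

# linear limit state in the standard normal space: g(u) = b0 + a'u
# load/resistance example with hypothetical moments; u = [u_s, u_r]
mu_s, sig_s = 10.0, 2.0
mu_r, sig_r = 20.0, 3.0
b0 = mu_r - mu_s
a = np.array([-sig_s, sig_r])        # gradient of g

beta = b0 / np.linalg.norm(a)        # reliability index
u_star = -(b0 / (a @ a)) * a         # design point
alpha = u_star / beta                # = -a/||a||: importance measures
# alpha[0] > 0: u_s acts as a load; alpha[1] < 0: u_r acts as a capacity
```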

83 FORM importance measures in the physical space. Linearize the map x = T(u) and its inverse around the design point, and with them the limit state function in the physical space. The contribution of each physical variable x_i to the uncertainty (variance) of the linearized limit state is weighted by its standard deviation; here we neglect the correlation, because we are not interested in it when defining the importance of single variables. The normalized vector of these contributions gives the importance measures of the variables in x.

84 example of importance measures in the physical space. Starting from the design point, the importance measures are evaluated in the standard normal space, in the normal (log) space and in the physical space, mapping through exp; the relative importance of the variables is reported in percent.

85 PART V: further remarks on component reliability

86 general transformation to the standard normal space. a) Every random variable x distributed by cdf F_x can be transformed into a uniform random variable w by the transformation w = F_x(x); the inverse map is x = F_x⁻¹(w). [figure: densities p_x(x), p_w(w) and the map between them.]

87 general transformation to the standard normal space [cont.]. a) Every random variable x distributed by F_x can be transformed into a uniform random variable w by w = F_x(x). b) A uniform random variable w can be transformed into a random variable distributed by any cdf F through the transformation x = F⁻¹(w); hence x can be derived from w through F_x⁻¹. In particular, x can be mapped into the standard normal distribution by the transformation u = Φ⁻¹(F_x(x)). Multivariate case: independent random variables: u_i = Φ⁻¹(F_i(x_i)); dependent random variables: u₁ = Φ⁻¹(F₁(x₁)), u₂ = Φ⁻¹(F_{2|1}(x₂ | x₁)), …: the Rosenblatt transformation.
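The scalar map u = Φ⁻¹(F_x(x)) and its inverse can be sketched for a hypothetical exponential variable (the distribution and rate are illustration choices, not the slide's), using only the standard library:

```python
from math import exp, log
from statistics import NormalDist

nd = NormalDist()
lam = 0.5                                   # hypothetical exponential rate

F = lambda x: 1.0 - exp(-lam * x)           # cdf of x (exponential)
F_inv = lambda w: -log(1.0 - w) / lam       # its inverse

x = 3.0
u = nd.inv_cdf(F(x))                        # u = Phi^-1(F(x)): standard normal space
x_back = F_inv(nd.cdf(u))                   # inverse map recovers x
```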

88 properties of the limit state function, to use FORM. Only the sign of the limit state function is relevant: functions g and h such that sign(g(x)) = sign(h(x)) everywhere are equivalent. The reliability does not depend on the slope of the gradient, or on the magnitude of g: the reliability problem is defined by the boundary g(x) = 0 [and the sign]. A linear limit state function is convenient, because local data at any point define the whole boundary. To use FORM (Newton's method), we require the function to be continuous and differentiable. [figure: density p(u) and equivalent limit state functions g(u).]

89 convergence of the Newton-Raphson method. The method may not converge in some conditions. To overcome this problem, one may pose a maximum size on the steps taken by the algorithm; see the Wolfe conditions and the Armijo rule. [figure: density p(u) and a limit state g(u) for which the iteration oscillates.]

90 SORM. Second Order Reliability Method: it approximates the limit state function with a quadratic form around the design point, taking the curvature into account via the Hessian matrix. It is more accurate than FORM, but computationally more expensive, because it requires obtaining the curvatures at the design point. It is a further step after FORM: find the design point, then compute the curvature at it.

91 bounds for FORM and SORM. By definition, the design point is the point of the failure domain closest to the origin; hence the whole ball ‖u‖ < β belongs to the safe domain. FORM upper bound to the probability of failure: P_f ≤ 1 − F_{χ²,n}(β²), where F_{χ²,n} is the cumulative chi-squared distribution with n degrees of freedom (n the number of dimensions) and β = ‖u*‖. [No lower bound.] [figure: FORM estimate and upper bound as functions of β.]
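The bound can be sketched in 2 dimensions, where the chi-squared cdf has a closed form (the β value is a hypothetical illustration; the FORM estimate Φ(−β) is exact only for a linear limit state):

```python
from math import exp
from statistics import NormalDist

beta = 2.0                         # reliability index, in n = 2 dimensions
# chi-squared cdf with 2 dof: F(t) = 1 - exp(-t/2),
# so the upper bound 1 - F(beta^2) reduces to exp(-beta^2/2)
p_upper = exp(-beta**2 / 2)
p_form = NormalDist().cdf(-beta)   # FORM estimate, always below the bound
```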

92 note about design point and reliability index. Definition of the reliability index: β = −Φ⁻¹(P_f), i.e. P_f = Φ(−β); distance of the design point from the origin: ‖u*‖; FORM approximation: β ≈ ‖u*‖. For a nonlinear map, the design point in the standard normal space is not necessarily mapped into the maximum-density point of the failure domain in the physical space. Analogy: the mode of the (e.g. univariate) normal distribution is not mapped into the mode of the lognormal.

93 references. On wikipedia: Cholesky decomposition; Eigenvalues and eigenvectors; Gradient; Jacobian; Positive definite matrix; Multivariate normal distribution; Newton's method; Chi-squared distribution; Wolfe conditions; Chain rule. Barber, D. (2012). Bayesian Reasoning and Machine Learning. Cambridge UP; Section 8.4 on the Multivariate Gaussian. Der Kiureghian, A. (2005). "First- and Second-Order Reliability Methods", in: E. Nikolaidis, D.M. Ghiocel, S. Singhal (Eds.), The Engineering Design Reliability Handbook, CRC Press. Ditlevsen, O. and H.O. Madsen (1996). Structural Reliability Methods. J. Wiley & Sons, New York, NY. Faber, M. (2009). Risk and Safety in Engineering, lecture notes. Sørensen, J.D. (2004). "Notes in Structural Reliability Theory and Risk Analysis", lecture notes.

94 Matlab commands. M = zeros(n,m): defines matrix M, of size (n×m), with all entries zero. length(v): number of entries in vector v. M = diag(v): makes the diagonal matrix M, putting vector v on the diagonal. L = chol(M,'lower'): computes the lower triangular matrix L from the Cholesky decomposition of M. M1.*M2: when matrices (or vectors) M1 and M2 have the same dimensions, it makes matrix M3 as the element-by-element product of M1 and M2. Similar element-wise operations: M1./M2, 1./M, M.^2.


More information

Multivariate Statistical Analysis

Multivariate Statistical Analysis Multivariate Statistical Analysis Fall 2011 C. L. Williams, Ph.D. Lecture 4 for Applied Multivariate Analysis Outline 1 Eigen values and eigen vectors Characteristic equation Some properties of eigendecompositions

More information

Gaussian random variables inr n

Gaussian random variables inr n Gaussian vectors Lecture 5 Gaussian random variables inr n One-dimensional case One-dimensional Gaussian density with mean and standard deviation (called N, ): fx x exp. Proposition If X N,, then ax b

More information

STA 4273H: Statistical Machine Learning

STA 4273H: Statistical Machine Learning STA 4273H: Statistical Machine Learning Russ Salakhutdinov Department of Statistics! rsalakhu@utstat.toronto.edu! http://www.utstat.utoronto.ca/~rsalakhu/ Sidney Smith Hall, Room 6002 Lecture 3 Linear

More information

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2.

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2. APPENDIX A Background Mathematics A. Linear Algebra A.. Vector algebra Let x denote the n-dimensional column vector with components 0 x x 2 B C @. A x n Definition 6 (scalar product). The scalar product

More information

Sensitivity and Reliability Analysis of Nonlinear Frame Structures

Sensitivity and Reliability Analysis of Nonlinear Frame Structures Sensitivity and Reliability Analysis of Nonlinear Frame Structures Michael H. Scott Associate Professor School of Civil and Construction Engineering Applied Mathematics and Computation Seminar April 8,

More information

Computer Vision Group Prof. Daniel Cremers. 9. Gaussian Processes - Regression

Computer Vision Group Prof. Daniel Cremers. 9. Gaussian Processes - Regression Group Prof. Daniel Cremers 9. Gaussian Processes - Regression Repetition: Regularized Regression Before, we solved for w using the pseudoinverse. But: we can kernelize this problem as well! First step:

More information

1 Data Arrays and Decompositions

1 Data Arrays and Decompositions 1 Data Arrays and Decompositions 1.1 Variance Matrices and Eigenstructure Consider a p p positive definite and symmetric matrix V - a model parameter or a sample variance matrix. The eigenstructure is

More information

Nonparametric Bayesian Methods (Gaussian Processes)

Nonparametric Bayesian Methods (Gaussian Processes) [70240413 Statistical Machine Learning, Spring, 2015] Nonparametric Bayesian Methods (Gaussian Processes) Jun Zhu dcszj@mail.tsinghua.edu.cn http://bigml.cs.tsinghua.edu.cn/~jun State Key Lab of Intelligent

More information

Multivariate Statistics

Multivariate Statistics Multivariate Statistics Chapter 2: Multivariate distributions and inference Pedro Galeano Departamento de Estadística Universidad Carlos III de Madrid pedro.galeano@uc3m.es Course 2016/2017 Master in Mathematical

More information

MACHINE LEARNING ADVANCED MACHINE LEARNING

MACHINE LEARNING ADVANCED MACHINE LEARNING MACHINE LEARNING ADVANCED MACHINE LEARNING Recap of Important Notions on Estimation of Probability Density Functions 2 2 MACHINE LEARNING Overview Definition pdf Definition joint, condition, marginal,

More information

Physics 403. Segev BenZvi. Numerical Methods, Maximum Likelihood, and Least Squares. Department of Physics and Astronomy University of Rochester

Physics 403. Segev BenZvi. Numerical Methods, Maximum Likelihood, and Least Squares. Department of Physics and Astronomy University of Rochester Physics 403 Numerical Methods, Maximum Likelihood, and Least Squares Segev BenZvi Department of Physics and Astronomy University of Rochester Table of Contents 1 Review of Last Class Quadratic Approximation

More information

MACHINE LEARNING ADVANCED MACHINE LEARNING

MACHINE LEARNING ADVANCED MACHINE LEARNING MACHINE LEARNING ADVANCED MACHINE LEARNING Recap of Important Notions on Estimation of Probability Density Functions 22 MACHINE LEARNING Discrete Probabilities Consider two variables and y taking discrete

More information

Elliptically Contoured Distributions

Elliptically Contoured Distributions Elliptically Contoured Distributions Recall: if X N p µ, Σ), then { 1 f X x) = exp 1 } det πσ x µ) Σ 1 x µ) So f X x) depends on x only through x µ) Σ 1 x µ), and is therefore constant on the ellipsoidal

More information

Computer Vision Group Prof. Daniel Cremers. 4. Gaussian Processes - Regression

Computer Vision Group Prof. Daniel Cremers. 4. Gaussian Processes - Regression Group Prof. Daniel Cremers 4. Gaussian Processes - Regression Definition (Rep.) Definition: A Gaussian process is a collection of random variables, any finite number of which have a joint Gaussian distribution.

More information

Adaptive Filtering. Squares. Alexander D. Poularikas. Fundamentals of. Least Mean. with MATLABR. University of Alabama, Huntsville, AL.

Adaptive Filtering. Squares. Alexander D. Poularikas. Fundamentals of. Least Mean. with MATLABR. University of Alabama, Huntsville, AL. Adaptive Filtering Fundamentals of Least Mean Squares with MATLABR Alexander D. Poularikas University of Alabama, Huntsville, AL CRC Press Taylor & Francis Croup Boca Raton London New York CRC Press is

More information

COMP 558 lecture 18 Nov. 15, 2010

COMP 558 lecture 18 Nov. 15, 2010 Least squares We have seen several least squares problems thus far, and we will see more in the upcoming lectures. For this reason it is good to have a more general picture of these problems and how to

More information

Mathematical foundations - linear algebra

Mathematical foundations - linear algebra Mathematical foundations - linear algebra Andrea Passerini passerini@disi.unitn.it Machine Learning Vector space Definition (over reals) A set X is called a vector space over IR if addition and scalar

More information

2 Functions of random variables

2 Functions of random variables 2 Functions of random variables A basic statistical model for sample data is a collection of random variables X 1,..., X n. The data are summarised in terms of certain sample statistics, calculated as

More information

VAR Model. (k-variate) VAR(p) model (in the Reduced Form): Y t-2. Y t-1 = A + B 1. Y t + B 2. Y t-p. + ε t. + + B p. where:

VAR Model. (k-variate) VAR(p) model (in the Reduced Form): Y t-2. Y t-1 = A + B 1. Y t + B 2. Y t-p. + ε t. + + B p. where: VAR Model (k-variate VAR(p model (in the Reduced Form: where: Y t = A + B 1 Y t-1 + B 2 Y t-2 + + B p Y t-p + ε t Y t = (y 1t, y 2t,, y kt : a (k x 1 vector of time series variables A: a (k x 1 vector

More information

Whitening and Coloring Transformations for Multivariate Gaussian Data. A Slecture for ECE 662 by Maliha Hossain

Whitening and Coloring Transformations for Multivariate Gaussian Data. A Slecture for ECE 662 by Maliha Hossain Whitening and Coloring Transformations for Multivariate Gaussian Data A Slecture for ECE 662 by Maliha Hossain Introduction This slecture discusses how to whiten data that is normally distributed. Data

More information

Fundamentals of Matrices

Fundamentals of Matrices Maschinelles Lernen II Fundamentals of Matrices Christoph Sawade/Niels Landwehr/Blaine Nelson Tobias Scheffer Matrix Examples Recap: Data Linear Model: f i x = w i T x Let X = x x n be the data matrix

More information

EFFICIENT MODELS FOR WIND TURBINE EXTREME LOADS USING INVERSE RELIABILITY

EFFICIENT MODELS FOR WIND TURBINE EXTREME LOADS USING INVERSE RELIABILITY Published in Proceedings of the L00 (Response of Structures to Extreme Loading) Conference, Toronto, August 00. EFFICIENT MODELS FOR WIND TURBINE ETREME LOADS USING INVERSE RELIABILITY K. Saranyasoontorn

More information

ECE521 week 3: 23/26 January 2017

ECE521 week 3: 23/26 January 2017 ECE521 week 3: 23/26 January 2017 Outline Probabilistic interpretation of linear regression - Maximum likelihood estimation (MLE) - Maximum a posteriori (MAP) estimation Bias-variance trade-off Linear

More information

Lecture Notes: Geometric Considerations in Unconstrained Optimization

Lecture Notes: Geometric Considerations in Unconstrained Optimization Lecture Notes: Geometric Considerations in Unconstrained Optimization James T. Allison February 15, 2006 The primary objectives of this lecture on unconstrained optimization are to: Establish connections

More information

2. Linear algebra. matrices and vectors. linear equations. range and nullspace of matrices. function of vectors, gradient and Hessian

2. Linear algebra. matrices and vectors. linear equations. range and nullspace of matrices. function of vectors, gradient and Hessian FE661 - Statistical Methods for Financial Engineering 2. Linear algebra Jitkomut Songsiri matrices and vectors linear equations range and nullspace of matrices function of vectors, gradient and Hessian

More information

Mobile Robotics 1. A Compact Course on Linear Algebra. Giorgio Grisetti

Mobile Robotics 1. A Compact Course on Linear Algebra. Giorgio Grisetti Mobile Robotics 1 A Compact Course on Linear Algebra Giorgio Grisetti SA-1 Vectors Arrays of numbers They represent a point in a n dimensional space 2 Vectors: Scalar Product Scalar-Vector Product Changes

More information

CMPE 58K Bayesian Statistics and Machine Learning Lecture 5

CMPE 58K Bayesian Statistics and Machine Learning Lecture 5 CMPE 58K Bayesian Statistics and Machine Learning Lecture 5 Multivariate distributions: Gaussian, Bernoulli, Probability tables Department of Computer Engineering, Boğaziçi University, Istanbul, Turkey

More information

Linear Algebra for Machine Learning. Sargur N. Srihari

Linear Algebra for Machine Learning. Sargur N. Srihari Linear Algebra for Machine Learning Sargur N. srihari@cedar.buffalo.edu 1 Overview Linear Algebra is based on continuous math rather than discrete math Computer scientists have little experience with it

More information

Chapter 5 continued. Chapter 5 sections

Chapter 5 continued. Chapter 5 sections Chapter 5 sections Discrete univariate distributions: 5.2 Bernoulli and Binomial distributions Just skim 5.3 Hypergeometric distributions 5.4 Poisson distributions Just skim 5.5 Negative Binomial distributions

More information

Sequential Importance Sampling for Rare Event Estimation with Computer Experiments

Sequential Importance Sampling for Rare Event Estimation with Computer Experiments Sequential Importance Sampling for Rare Event Estimation with Computer Experiments Brian Williams and Rick Picard LA-UR-12-22467 Statistical Sciences Group, Los Alamos National Laboratory Abstract Importance

More information

Econ Slides from Lecture 7

Econ Slides from Lecture 7 Econ 205 Sobel Econ 205 - Slides from Lecture 7 Joel Sobel August 31, 2010 Linear Algebra: Main Theory A linear combination of a collection of vectors {x 1,..., x k } is a vector of the form k λ ix i for

More information

The Multivariate Gaussian Distribution [DRAFT]

The Multivariate Gaussian Distribution [DRAFT] The Multivariate Gaussian Distribution DRAFT David S. Rosenberg Abstract This is a collection of a few key and standard results about multivariate Gaussian distributions. I have not included many proofs,

More information

3. Probability and Statistics

3. Probability and Statistics FE661 - Statistical Methods for Financial Engineering 3. Probability and Statistics Jitkomut Songsiri definitions, probability measures conditional expectations correlation and covariance some important

More information

Guideline for Offshore Structural Reliability Analysis - General 3. RELIABILITY ANALYSIS 38

Guideline for Offshore Structural Reliability Analysis - General 3. RELIABILITY ANALYSIS 38 FEBRUARY 20, 1995 3. RELIABILITY ANALYSIS 38 3.1 General 38 3.1.1 Variables 38 3.1.2 Events 39 3.1.3 Event Probability 41 3.1.4 The Reliability Index 41 3.1.5 The Design Point 42 3.1.6 Transformation of

More information

Robotics 2 Data Association. Giorgio Grisetti, Cyrill Stachniss, Kai Arras, Wolfram Burgard

Robotics 2 Data Association. Giorgio Grisetti, Cyrill Stachniss, Kai Arras, Wolfram Burgard Robotics 2 Data Association Giorgio Grisetti, Cyrill Stachniss, Kai Arras, Wolfram Burgard Data Association Data association is the process of associating uncertain measurements to known tracks. Problem

More information

5. Discriminant analysis

5. Discriminant analysis 5. Discriminant analysis We continue from Bayes s rule presented in Section 3 on p. 85 (5.1) where c i is a class, x isap-dimensional vector (data case) and we use class conditional probability (density

More information

On the Fisher Bingham Distribution

On the Fisher Bingham Distribution On the Fisher Bingham Distribution BY A. Kume and S.G Walker Institute of Mathematics, Statistics and Actuarial Science, University of Kent Canterbury, CT2 7NF,UK A.Kume@kent.ac.uk and S.G.Walker@kent.ac.uk

More information

[y i α βx i ] 2 (2) Q = i=1

[y i α βx i ] 2 (2) Q = i=1 Least squares fits This section has no probability in it. There are no random variables. We are given n points (x i, y i ) and want to find the equation of the line that best fits them. We take the equation

More information

7. Symmetric Matrices and Quadratic Forms

7. Symmetric Matrices and Quadratic Forms Linear Algebra 7. Symmetric Matrices and Quadratic Forms CSIE NCU 1 7. Symmetric Matrices and Quadratic Forms 7.1 Diagonalization of symmetric matrices 2 7.2 Quadratic forms.. 9 7.4 The singular value

More information

Problem 1: Toolbox (25 pts) For all of the parts of this problem, you are limited to the following sets of tools:

Problem 1: Toolbox (25 pts) For all of the parts of this problem, you are limited to the following sets of tools: CS 322 Final Exam Friday 18 May 2007 150 minutes Problem 1: Toolbox (25 pts) For all of the parts of this problem, you are limited to the following sets of tools: (A) Runge-Kutta 4/5 Method (B) Condition

More information

1 Appendix A: Matrix Algebra

1 Appendix A: Matrix Algebra Appendix A: Matrix Algebra. Definitions Matrix A =[ ]=[A] Symmetric matrix: = for all and Diagonal matrix: 6=0if = but =0if 6= Scalar matrix: the diagonal matrix of = Identity matrix: the scalar matrix

More information

Eigenvalues and diagonalization

Eigenvalues and diagonalization Eigenvalues and diagonalization Patrick Breheny November 15 Patrick Breheny BST 764: Applied Statistical Modeling 1/20 Introduction The next topic in our course, principal components analysis, revolves

More information

Created by Erik Kostandyan, v4 January 15, 2017

Created by Erik Kostandyan, v4 January 15, 2017 MATLAB Functions for the First, Second and Inverse First Order Reliability Methods Copyrighted by Erik Kostandyan, Contact: erik.kostandyan.reliability@gmail.com Contents Description... References:...

More information

8 - Continuous random vectors

8 - Continuous random vectors 8-1 Continuous random vectors S. Lall, Stanford 2011.01.25.01 8 - Continuous random vectors Mean-square deviation Mean-variance decomposition Gaussian random vectors The Gamma function The χ 2 distribution

More information

Linear Algebra, part 3 QR and SVD

Linear Algebra, part 3 QR and SVD Linear Algebra, part 3 QR and SVD Anna-Karin Tornberg Mathematical Models, Analysis and Simulation Fall semester, 2012 Going back to least squares (Section 1.4 from Strang, now also see section 5.2). We

More information

MACHINE LEARNING. Methods for feature extraction and reduction of dimensionality: Probabilistic PCA and kernel PCA

MACHINE LEARNING. Methods for feature extraction and reduction of dimensionality: Probabilistic PCA and kernel PCA 1 MACHINE LEARNING Methods for feature extraction and reduction of dimensionality: Probabilistic PCA and kernel PCA 2 Practicals Next Week Next Week, Practical Session on Computer Takes Place in Room GR

More information

Foundations of Computer Vision

Foundations of Computer Vision Foundations of Computer Vision Wesley. E. Snyder North Carolina State University Hairong Qi University of Tennessee, Knoxville Last Edited February 8, 2017 1 3.2. A BRIEF REVIEW OF LINEAR ALGEBRA Apply

More information

Gaussian Models (9/9/13)

Gaussian Models (9/9/13) STA561: Probabilistic machine learning Gaussian Models (9/9/13) Lecturer: Barbara Engelhardt Scribes: Xi He, Jiangwei Pan, Ali Razeen, Animesh Srivastava 1 Multivariate Normal Distribution The multivariate

More information

Statistics for scientists and engineers

Statistics for scientists and engineers Statistics for scientists and engineers February 0, 006 Contents Introduction. Motivation - why study statistics?................................... Examples..................................................3

More information

Statistics. Lent Term 2015 Prof. Mark Thomson. 2: The Gaussian Limit

Statistics. Lent Term 2015 Prof. Mark Thomson. 2: The Gaussian Limit Statistics Lent Term 2015 Prof. Mark Thomson Lecture 2 : The Gaussian Limit Prof. M.A. Thomson Lent Term 2015 29 Lecture Lecture Lecture Lecture 1: Back to basics Introduction, Probability distribution

More information

Principal Component Analysis-I Geog 210C Introduction to Spatial Data Analysis. Chris Funk. Lecture 17

Principal Component Analysis-I Geog 210C Introduction to Spatial Data Analysis. Chris Funk. Lecture 17 Principal Component Analysis-I Geog 210C Introduction to Spatial Data Analysis Chris Funk Lecture 17 Outline Filters and Rotations Generating co-varying random fields Translating co-varying fields into

More information

Probabilistic Graphical Models

Probabilistic Graphical Models Probabilistic Graphical Models Brown University CSCI 2950-P, Spring 2013 Prof. Erik Sudderth Lecture 13: Learning in Gaussian Graphical Models, Non-Gaussian Inference, Monte Carlo Methods Some figures

More information

5. Random Vectors. probabilities. characteristic function. cross correlation, cross covariance. Gaussian random vectors. functions of random vectors

5. Random Vectors. probabilities. characteristic function. cross correlation, cross covariance. Gaussian random vectors. functions of random vectors EE401 (Semester 1) 5. Random Vectors Jitkomut Songsiri probabilities characteristic function cross correlation, cross covariance Gaussian random vectors functions of random vectors 5-1 Random vectors we

More information

Bayesian decision theory Introduction to Pattern Recognition. Lectures 4 and 5: Bayesian decision theory

Bayesian decision theory Introduction to Pattern Recognition. Lectures 4 and 5: Bayesian decision theory Bayesian decision theory 8001652 Introduction to Pattern Recognition. Lectures 4 and 5: Bayesian decision theory Jussi Tohka jussi.tohka@tut.fi Institute of Signal Processing Tampere University of Technology

More information

1.6: 16, 20, 24, 27, 28

1.6: 16, 20, 24, 27, 28 .6: 6, 2, 24, 27, 28 6) If A is positive definite, then A is positive definite. The proof of the above statement can easily be shown for the following 2 2 matrix, a b A = b c If that matrix is positive

More information

EEL 5544 Noise in Linear Systems Lecture 30. X (s) = E [ e sx] f X (x)e sx dx. Moments can be found from the Laplace transform as

EEL 5544 Noise in Linear Systems Lecture 30. X (s) = E [ e sx] f X (x)e sx dx. Moments can be found from the Laplace transform as L30-1 EEL 5544 Noise in Linear Systems Lecture 30 OTHER TRANSFORMS For a continuous, nonnegative RV X, the Laplace transform of X is X (s) = E [ e sx] = 0 f X (x)e sx dx. For a nonnegative RV, the Laplace

More information

Discriminant analysis and supervised classification

Discriminant analysis and supervised classification Discriminant analysis and supervised classification Angela Montanari 1 Linear discriminant analysis Linear discriminant analysis (LDA) also known as Fisher s linear discriminant analysis or as Canonical

More information

Today: Fundamentals of Monte Carlo

Today: Fundamentals of Monte Carlo Today: Fundamentals of Monte Carlo What is Monte Carlo? Named at Los Alamos in 940 s after the casino. Any method which uses (pseudo)random numbers as an essential part of the algorithm. Stochastic - not

More information

Reduction of Random Variables in Structural Reliability Analysis

Reduction of Random Variables in Structural Reliability Analysis Reduction of Random Variables in Structural Reliability Analysis S. ADHIKARI AND R. S. LANGLEY Cambridge University Engineering Department Cambridge, U.K. Random Variable Reduction in Reliability Analysis

More information

Short course A vademecum of statistical pattern recognition techniques with applications to image and video analysis. Agenda

Short course A vademecum of statistical pattern recognition techniques with applications to image and video analysis. Agenda Short course A vademecum of statistical pattern recognition techniques with applications to image and video analysis Lecture Recalls of probability theory Massimo Piccardi University of Technology, Sydney,

More information

Bayesian Regression Linear and Logistic Regression

Bayesian Regression Linear and Logistic Regression When we want more than point estimates Bayesian Regression Linear and Logistic Regression Nicole Beckage Ordinary Least Squares Regression and Lasso Regression return only point estimates But what if we

More information

CSE 554 Lecture 7: Alignment

CSE 554 Lecture 7: Alignment CSE 554 Lecture 7: Alignment Fall 2012 CSE554 Alignment Slide 1 Review Fairing (smoothing) Relocating vertices to achieve a smoother appearance Method: centroid averaging Simplification Reducing vertex

More information

Safety Envelope for Load Tolerance and Its Application to Fatigue Reliability Design

Safety Envelope for Load Tolerance and Its Application to Fatigue Reliability Design Safety Envelope for Load Tolerance and Its Application to Fatigue Reliability Design Haoyu Wang * and Nam H. Kim University of Florida, Gainesville, FL 32611 Yoon-Jun Kim Caterpillar Inc., Peoria, IL 61656

More information

Non-linear least squares

Non-linear least squares Non-linear least squares Concept of non-linear least squares We have extensively studied linear least squares or linear regression. We see that there is a unique regression line that can be determined

More information

A geometric proof of the spectral theorem for real symmetric matrices

A geometric proof of the spectral theorem for real symmetric matrices 0 0 0 A geometric proof of the spectral theorem for real symmetric matrices Robert Sachs Department of Mathematical Sciences George Mason University Fairfax, Virginia 22030 rsachs@gmu.edu January 6, 2011

More information

Overfitting, Bias / Variance Analysis

Overfitting, Bias / Variance Analysis Overfitting, Bias / Variance Analysis Professor Ameet Talwalkar Professor Ameet Talwalkar CS260 Machine Learning Algorithms February 8, 207 / 40 Outline Administration 2 Review of last lecture 3 Basic

More information

Computational Methods CMSC/AMSC/MAPL 460. Eigenvalues and Eigenvectors. Ramani Duraiswami, Dept. of Computer Science

Computational Methods CMSC/AMSC/MAPL 460. Eigenvalues and Eigenvectors. Ramani Duraiswami, Dept. of Computer Science Computational Methods CMSC/AMSC/MAPL 460 Eigenvalues and Eigenvectors Ramani Duraiswami, Dept. of Computer Science Eigen Values of a Matrix Recap: A N N matrix A has an eigenvector x (non-zero) with corresponding

More information

Kernel Methods. Machine Learning A W VO

Kernel Methods. Machine Learning A W VO Kernel Methods Machine Learning A 708.063 07W VO Outline 1. Dual representation 2. The kernel concept 3. Properties of kernels 4. Examples of kernel machines Kernel PCA Support vector regression (Relevance

More information

Exercises * on Principal Component Analysis

Exercises * on Principal Component Analysis Exercises * on Principal Component Analysis Laurenz Wiskott Institut für Neuroinformatik Ruhr-Universität Bochum, Germany, EU 4 February 207 Contents Intuition 3. Problem statement..........................................

More information

Discrete Mathematics and Probability Theory Fall 2015 Lecture 21

Discrete Mathematics and Probability Theory Fall 2015 Lecture 21 CS 70 Discrete Mathematics and Probability Theory Fall 205 Lecture 2 Inference In this note we revisit the problem of inference: Given some data or observations from the world, what can we infer about

More information

Introduction to Mobile Robotics Compact Course on Linear Algebra. Wolfram Burgard, Bastian Steder

Introduction to Mobile Robotics Compact Course on Linear Algebra. Wolfram Burgard, Bastian Steder Introduction to Mobile Robotics Compact Course on Linear Algebra Wolfram Burgard, Bastian Steder Reference Book Thrun, Burgard, and Fox: Probabilistic Robotics Vectors Arrays of numbers Vectors represent

More information

8. Diagonalization.

8. Diagonalization. 8. Diagonalization 8.1. Matrix Representations of Linear Transformations Matrix of A Linear Operator with Respect to A Basis We know that every linear transformation T: R n R m has an associated standard

More information

Announcements (repeat) Principal Components Analysis

Announcements (repeat) Principal Components Analysis 4/7/7 Announcements repeat Principal Components Analysis CS 5 Lecture #9 April 4 th, 7 PA4 is due Monday, April 7 th Test # will be Wednesday, April 9 th Test #3 is Monday, May 8 th at 8AM Just hour long

More information

Random Matrix Eigenvalue Problems in Probabilistic Structural Mechanics

Random Matrix Eigenvalue Problems in Probabilistic Structural Mechanics Random Matrix Eigenvalue Problems in Probabilistic Structural Mechanics S Adhikari Department of Aerospace Engineering, University of Bristol, Bristol, U.K. URL: http://www.aer.bris.ac.uk/contact/academic/adhikari/home.html

More information

Transformation of Probability Densities

Transformation of Probability Densities Transformation of Probability Densities This Wikibook shows how to transform the probability density of a continuous random variable in both the one-dimensional and multidimensional case. In other words,

More information

PCA & ICA. CE-717: Machine Learning Sharif University of Technology Spring Soleymani

PCA & ICA. CE-717: Machine Learning Sharif University of Technology Spring Soleymani PCA & ICA CE-717: Machine Learning Sharif University of Technology Spring 2015 Soleymani Dimensionality Reduction: Feature Selection vs. Feature Extraction Feature selection Select a subset of a given

More information

Multivariate Distribution Models

Multivariate Distribution Models Multivariate Distribution Models Model Description While the probability distribution for an individual random variable is called marginal, the probability distribution for multiple random variables is

More information

L3: Review of linear algebra and MATLAB

L3: Review of linear algebra and MATLAB L3: Review of linear algebra and MATLAB Vector and matrix notation Vectors Matrices Vector spaces Linear transformations Eigenvalues and eigenvectors MATLAB primer CSCE 666 Pattern Analysis Ricardo Gutierrez-Osuna

More information

Unsupervised Learning: Dimensionality Reduction

Unsupervised Learning: Dimensionality Reduction Unsupervised Learning: Dimensionality Reduction CMPSCI 689 Fall 2015 Sridhar Mahadevan Lecture 3 Outline In this lecture, we set about to solve the problem posed in the previous lecture Given a dataset,

More information

EECS 275 Matrix Computation

EECS 275 Matrix Computation EECS 275 Matrix Computation Ming-Hsuan Yang Electrical Engineering and Computer Science University of California at Merced Merced, CA 95344 http://faculty.ucmerced.edu/mhyang Lecture 6 1 / 22 Overview

More information

Previously Monte Carlo Integration

Previously Monte Carlo Integration Previously Simulation, sampling Monte Carlo Simulations Inverse cdf method Rejection sampling Today: sampling cont., Bayesian inference via sampling Eigenvalues and Eigenvectors Markov processes, PageRank

More information

Random Vibrations & Failure Analysis Sayan Gupta Indian Institute of Technology Madras

Random Vibrations & Failure Analysis Sayan Gupta Indian Institute of Technology Madras Random Vibrations & Failure Analysis Sayan Gupta Indian Institute of Technology Madras Lecture 1: Introduction Course Objectives: The focus of this course is on gaining understanding on how to make an

More information

Next is material on matrix rank. Please see the handout

Next is material on matrix rank. Please see the handout B90.330 / C.005 NOTES for Wednesday 0.APR.7 Suppose that the model is β + ε, but ε does not have the desired variance matrix. Say that ε is normal, but Var(ε) σ W. The form of W is W w 0 0 0 0 0 0 w 0

More information

Statistical Geometry Processing Winter Semester 2011/2012

Statistical Geometry Processing Winter Semester 2011/2012 Statistical Geometry Processing Winter Semester 2011/2012 Linear Algebra, Function Spaces & Inverse Problems Vector and Function Spaces 3 Vectors vectors are arrows in space classically: 2 or 3 dim. Euclidian

More information

Link to Paper. The latest iteration can be found at:

Link to Paper. The latest iteration can be found at: Link to Paper Introduction The latest iteration can be found at: http://learneconometrics.com/pdf/gc2017/collin_gretl_170523.pdf BKW dignostics in GRETL: Interpretation and Performance Oklahoma State University

More information

The Expectation-Maximization Algorithm

The Expectation-Maximization Algorithm 1/29 EM & Latent Variable Models Gaussian Mixture Models EM Theory The Expectation-Maximization Algorithm Mihaela van der Schaar Department of Engineering Science University of Oxford MLE for Latent Variable

More information

Lecture Notes 1: Vector spaces

Lecture Notes 1: Vector spaces Optimization-based data analysis Fall 2017 Lecture Notes 1: Vector spaces In this chapter we review certain basic concepts of linear algebra, highlighting their application to signal processing. 1 Vector

More information

Chapter 3 Numerical Methods

Chapter 3 Numerical Methods Chapter 3 Numerical Methods Part 2 3.2 Systems of Equations 3.3 Nonlinear and Constrained Optimization 1 Outline 3.2 Systems of Equations 3.3 Nonlinear and Constrained Optimization Summary 2 Outline 3.2

More information

Dimensionality Reduction and Principle Components

Dimensionality Reduction and Principle Components Dimensionality Reduction and Principle Components Ken Kreutz-Delgado (Nuno Vasconcelos) UCSD ECE Department Winter 2012 Motivation Recall, in Bayesian decision theory we have: World: States Y in {1,...,

More information