
8 Parameter-dependent integrals: some mathematical tools

K. Breitung
Department of Civil Engineering, University of Calgary, 2500 University Drive N.W., Calgary, Alberta, Canada, T2N 1N4

1. INTRODUCTION

In many reliability problems integrals of the following form

F(\tau) = \int_{g(z,\tau) \le 0} f(z,\tau) \, dz        (1)

are of interest. Here f(z,\tau) is usually a probability density and g(z,\tau) a limit state function. Both functions depend on a parameter vector \tau. Then F(\tau) denotes the failure probability for the parameter value \tau.

In the case that only the integrand depends on the parameter \tau, the derivatives are obtained easily by differentiating under the integral sign. But the case that the limit state function depends on a parameter is of importance in reliability problems, especially in optimization. An example of such a problem is given in [7]. The concepts of asymptotic approximation methods for such integrals are outlined in [5].

2. DERIVATIVE OF F(\tau)

In [3] it was shown that under some regularity conditions the derivative F'(\tau) is given by

F'(\tau) = \int_{D(\tau)} f_\tau(y,\tau) \, dy - \int_{G(\tau)} f(y,\tau) \frac{g_\tau(y,\tau)}{|\nabla_y g(y,\tau)|} \, ds_\tau(y).        (2)

Here D(\tau) = \{y;\, g(y,\tau) < 0\}, G(\tau) = \{y;\, g(y,\tau) = 0\} and ds_\tau(y) denotes surface integration over G(\tau).

A disadvantage of this relation is that the second term is a surface integral, which is in general difficult to compute. Using the divergence theorem, we can transform it into a domain integral.
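For concreteness, the following Python sketch (an illustration added here, not part of the original paper) estimates F(\tau) in equation (1) by crude Monte Carlo for a standard normal density and the hypothetical limit state g(z,\tau) = \tau - z_1, for which the exact values F(\tau) = \Phi(-\tau) and F'(\tau) = -\varphi(\tau) are known; the finite-difference value gives a rough reference for the sensitivity formulas derived below.

```python
import numpy as np
from scipy.stats import norm

def failure_prob(tau, n_samples=200_000, dim=2, seed=0):
    """Crude Monte Carlo estimate of F(tau) = P(g(Z, tau) <= 0) from eq. (1) for
    standard normal Z and the illustrative limit state g(z, tau) = tau - z_1."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((n_samples, dim))
    g = tau - z[:, 0]                 # hypothetical limit state, exact F(tau) = Phi(-tau)
    return np.mean(g <= 0.0)

tau = 2.0
h = 0.05
print("MC estimate of F(tau)    :", failure_prob(tau))
print("exact Phi(-tau)          :", norm.cdf(-tau))
# central finite difference as a crude reference value for F'(tau)
print("FD estimate of F'(tau)   :", (failure_prob(tau + h) - failure_prob(tau - h)) / (2 * h))
print("exact F'(tau) = -phi(tau):", -norm.pdf(tau))
```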

The divergence theorem states that for a continuously differentiable vector field u(x) defined on a domain F with boundary G we have

\int_F \mathrm{div}(u(x)) \, dx = \int_G (n(y), u(y)) \, ds(y).        (3)

Here \mathrm{div}(u(x)) = \sum_{i=1}^n \partial u_i(x)/\partial x_i, n(y) is the outward pointing normal at y and (x, y) is the scalar product of x and y.

First we assume that the gradient \nabla_y g(y,\tau) does not vanish anywhere in D(\tau). If we now define a vector field u(x) by

u(x) = \frac{f(x,\tau) g_\tau(x,\tau)}{|\nabla_x g(x,\tau)|^2} \nabla_x g(x,\tau)        (4)

we get on the surface G(\tau), with outward pointing normal n(y) = |\nabla_y g(y,\tau)|^{-1} \nabla_y g(y,\tau), for the scalar product (n(y), u(y)) the form

(n(y), u(y)) = \frac{f(y,\tau) g_\tau(y,\tau)}{|\nabla_y g(y,\tau)|},        (5)

and therefore we have that

\int_{G(\tau)} f(y,\tau) \frac{g_\tau(y,\tau)}{|\nabla_y g(y,\tau)|} \, ds_\tau(y) = \int_F \mathrm{div}\!\left( \frac{f(x,\tau) g_\tau(x,\tau)}{|\nabla_x g(x,\tau)|^2} \nabla_x g(x,\tau) \right) dx.        (6)

This gives then for the derivative F'(\tau) the form

F'(\tau) = \int_{g(x,\tau) \le 0} \left[ f_\tau(x,\tau) - \mathrm{div}\!\left( \frac{f(x,\tau) g_\tau(x,\tau)}{|\nabla_x g(x,\tau)|^2} \nabla_x g(x,\tau) \right) \right] dx.        (7)

If now the gradient vanishes at a finite number of points in D(\tau), but the Hessian H_g(x) = (g_{ij}(x,\tau))_{i,j=1,\dots,n} is regular at all these points, equation (7) remains valid. We show this for the case of a single point x^* in D(\tau) with vanishing gradient. We consider a domain D^*(\tau) given by D^*(\tau) = D(\tau) \setminus K(\epsilon), where K(\epsilon) is the sphere around x^* with radius \epsilon. The value of \epsilon is chosen so small that K(\epsilon) \subset D(\tau). Then the divergence theorem is valid for this domain. Its boundary consists of the surface G(\tau) and the surface S(\epsilon) of the sphere K(\epsilon). So we obtain, since the outward pointing normal on S(\epsilon) is |x^* - y|^{-1}(x^* - y), the following equation

\int_{G(\tau)} f(y,\tau) \frac{g_\tau(y,\tau)}{|\nabla_y g(y,\tau)|} \, ds_\tau(y) = \int_F \mathrm{div}\!\left( \frac{f(x,\tau) g_\tau(x,\tau)}{|\nabla_x g(x,\tau)|^2} \nabla_x g(x,\tau) \right) dx - \int_{S(\epsilon)} f(y,\tau) \frac{g_\tau(y,\tau)}{|\nabla_y g(y,\tau)|} \left( \frac{\nabla_y g(y,\tau)}{|\nabla_y g(y,\tau)|}, \frac{x^* - y}{|x^* - y|} \right) ds_\epsilon(y)        (8)

with ds_\epsilon(y) denoting surface integration over S(\epsilon); the last surface integral is denoted by I(\epsilon).
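For the same illustrative case as above (standard normal f, g(z,\tau) = \tau - z_1, an assumption of this sketch), the divergence in equation (7) can be worked out by hand: g_\tau = 1, \nabla_z g = -e_1, and \mathrm{div}(u)(z) = z_1 f(z), so the bracketed integrand divided by f equals -z_1 on D(\tau). A minimal Monte Carlo evaluation of eq. (7) under these assumptions:

```python
import numpy as np
from scipy.stats import norm

def dF_dtau_eq7(tau, n_samples=400_000, dim=2, seed=1):
    """Monte Carlo evaluation of eq. (7): F'(tau) = E_f[(f_tau - div u)/f ; g <= 0].
    For the toy case above the integrand divided by f reduces to -z_1 on D(tau)."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((n_samples, dim))
    in_failure_domain = (tau - z[:, 0]) <= 0.0
    return np.mean(np.where(in_failure_domain, -z[:, 0], 0.0))

tau = 2.0
print("eq. (7) MC estimate:", dF_dtau_eq7(tau))
print("exact -phi(tau)    :", -norm.pdf(tau))
```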

For the last surface integral I(\epsilon) we get, with K = \max_{y \in K(\epsilon)} |f(y,\tau) g_\tau(y,\tau)| and since the scalar product of the two unit vectors in (8) is at most one in absolute value, the upper bound

|I(\epsilon)| \le K \int_{S(\epsilon)} |\nabla_y g(y,\tau)|^{-1} \, ds_\epsilon(y).

Making a Taylor expansion of the first derivatives of g at x^* gives

\nabla_y g(y,\tau) = H_g(x^*)(y - x^*) + o(\epsilon).        (9)

With \mu_0 = \min(|\mu_1|, \dots, |\mu_n|) > 0, where the \mu_i's are the eigenvalues of H_g(x^*), we get for the norm

|\nabla_y g(y,\tau)| \ge \mu_0 |y - x^*| + o(\epsilon) = \mu_0 \epsilon + o(\epsilon).        (10)

This gives then for the integral I(\epsilon) that

|I(\epsilon)| \le K \mu_0^{-1} \epsilon^{-1} \int_{S(\epsilon)} ds_\epsilon(y) + o(\epsilon^{n-2}) = K \mu_0^{-1} \frac{2\pi^{n/2}}{\Gamma(n/2)} \epsilon^{n-2} + o(\epsilon^{n-2}) = O(\epsilon^{n-2}).        (11)

Therefore equation (7) remains valid if the Hessian is regular and the dimension of the integration domain is larger than two.

3. QUADRATIC FORMS ON SUBSPACES

In a number of problems the definiteness of a matrix under linear constraints is of interest. Given is an n x n matrix H and a subspace U spanned by m linearly independent vectors a_1, \dots, a_m. The problem is to determine whether the matrix is positive (or negative) definite on the subspace orthogonal to U.

We consider first the case that a_i = e_i, i.e. the vectors a_i are the first m unit vectors, and that H is a diagonal matrix with diagonal elements \mu_1, \dots, \mu_n. Then we have for the quadratic form x^T H x the representation

x^T H x = \sum_{i=1}^n \mu_i x_i^2 = \sum_{j=1}^m \mu_j x_j^2 + \sum_{j=m+1}^n \mu_j x_j^2.        (12)

This quadratic form is positive definite under the constraint x_1 = \dots = x_m = 0 if the last n - m diagonal elements \mu_{m+1}, \dots, \mu_n are positive. We consider now the quadratic form defined by

x^T H^* x = \sum_{j=1}^m x_j^2 + \sum_{j=m+1}^n \mu_j x_j^2,        (13)

with H^* a diagonal matrix with diagonal elements 1, \dots, 1, \mu_{m+1}, \dots, \mu_n. This quadratic form is positive definite if the quadratic form x^T H x is positive definite under the constraint x_1 = \dots = x_m = 0.

The projection matrix P of the projection onto the subspace spanned by e_1, \dots, e_m is the diagonal matrix with the first m diagonal elements equal to one and the rest equal to zero, and the projection onto the orthogonal subspace is I_n - P. Then we can write the last equation in the form

x^T H^* x = x^T \left( P P + (I_n - P) H (I_n - P) \right) x.        (14)

In the general case of a subspace spanned by m arbitrary linearly independent vectors a_1, \dots, a_m, we can reduce the problem to the case above by making a rotation such that the m-dimensional subspace spanned by these vectors is transformed into the subspace spanned by the first m new coordinate vectors. The projection matrix onto that subspace is given by

P = A (A^T A)^{-1} A^T        (15)

with A = (a_1, \dots, a_m). Then we get for the quadratic form x^T H^+ x, with H^+ = P P + (I_n - P) H (I_n - P), the same form as in the last equation. This gives finally: a matrix H is positive definite under the constraint A^T x = 0_m iff the matrix H^+ = P P + (I_n - P) H (I_n - P) is positive definite. Analogously, it is negative definite under these constraints iff H^- = -P P + (I_n - P) H (I_n - P) is negative definite. Whether a matrix is positive definite can be checked easily by making a Cholesky decomposition: if this algorithm does not break down, the matrix is positive definite.
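The criterion just stated translates directly into a numerical test. The following Python sketch (an illustration added here, not code from the paper) builds H^+ from a constraint matrix A and uses an attempted Cholesky factorization as the definiteness check, in the spirit of the remark above.

```python
import numpy as np

def is_posdef_under_constraints(H, A):
    """Check whether x^T H x > 0 for all x != 0 satisfying A^T x = 0_m, using the
    criterion from the text: H+ = P P + (I - P) H (I - P) with P = A (A^T A)^{-1} A^T
    must be positive definite, which is tested by attempting a Cholesky factorization."""
    n = H.shape[0]
    P = A @ np.linalg.solve(A.T @ A, A.T)            # projection onto span(a_1, ..., a_m)
    Q = np.eye(n) - P                                # projection onto the orthogonal complement
    H_plus = P @ P + Q @ H @ Q
    try:
        np.linalg.cholesky(0.5 * (H_plus + H_plus.T))  # symmetrized against round-off
        return True
    except np.linalg.LinAlgError:
        return False

# small illustration: H is indefinite on R^3 but positive definite on {x : x_1 = 0}
H = np.diag([-1.0, 2.0, 3.0])
print(is_posdef_under_constraints(H, A=np.array([[1.0], [0.0], [0.0]])))   # True
print(is_posdef_under_constraints(H, A=np.array([[0.0], [1.0], [0.0]])))   # False
```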

In SORM we need the curvature correction factor for obtaining an asymptotic approximation of the failure probability. It is given in the form

P(F) \approx \Phi(-\beta) \prod_{i=1}^{n-1} (1 - \beta \kappa_i)^{-1/2}.        (16)

Here the \kappa_i's are the main curvatures of the limit state surface at the beta point x_0 (see [2]). In [6] it is shown that the largest eigenvalue can be obtained during the final stage of a numerical search for the beta point. The last result about quadratic forms on subspaces gives a simple method for calculating this curvature correction factor, written as (\prod_{i=1}^{n-1} (1 - \kappa_i))^{-1/2}, which avoids an eigenvalue analysis for computing the main curvatures \kappa_1, \dots, \kappa_{n-1}.

By a rotation of the coordinates it can always be achieved that the x_n-axis is in the direction of the normal vector of the surface G at x_0 and the tangential space is spanned by the vectors in the directions of the x_1, \dots, x_{n-1}-axes. The product \prod_{i=1}^{n-1} (1 - \kappa_i), whose inverse square root is the curvature correction factor, is then given by (see [2], eq. (25))

\prod_{i=1}^{n-1} (1 - \kappa_i) = \det \left( \delta_{ij} + \frac{g_{ij}(x_0)}{|\nabla g(x_0)|} \right)_{i,j=1,\dots,n-1}.        (17)

The g_{ij}(x_0) are the second derivatives of g at x_0 with respect to x_i and x_j. If we add a row and a column with zeros everywhere and a one only in the main diagonal, the value of the determinant remains the same and we obtain

\prod_{i=1}^{n-1} (1 - \kappa_i) = \det \begin{pmatrix} \left( \delta_{ij} + \dfrac{g_{ij}(x_0)}{|\nabla g(x_0)|} \right)_{i,j=1,\dots,n-1} & 0_{n-1} \\ 0_{n-1}^T & 1 \end{pmatrix}.        (18)

Denote the matrix in (18) by D. If we set

H(x_0) = \left( \frac{g_{ij}(x_0)}{|\nabla g(x_0)|} \right)_{i,j=1,\dots,n}        (19)

we get that the matrix D can be written in the form

D = P^T H(x_0) P + e_n e_n^T        (20)

with e_n = (0, \dots, 0, 1)^T and P = I_n - e_n e_n^T. Due to the special choice of the coordinate system, e_n is the normal vector of the surface G at x_0 and P is the projection matrix onto the tangential space of G at x_0. But this formulation is invariant under linear coordinate changes; we just have to replace e_n by the normal vector in the new coordinates, n = |\nabla g(x_0)|^{-1} \nabla g(x_0). So we have for an arbitrary coordinate system the following expression for the curvature factor

\prod_{i=1}^{n-1} (1 - \kappa_i) = \det(D) = \det\left( P^T H(x_0) P + |\nabla g(x_0)|^{-2} \nabla g(x_0) (\nabla g(x_0))^T \right)        (21)

with P = I_n - |\nabla g(x_0)|^{-2} \nabla g(x_0) (\nabla g(x_0))^T. Since the function |x|^2 has a local minimum at the point x_0 under the constraint g(x) = 0, the matrix H(x_0) is positive definite under the linear constraint n^T x = 0 if the extremum is regular. Therefore in this case the determinant of D can be found from the Cholesky decomposition of P^T H(x_0) P + |\nabla g(x_0)|^{-2} \nabla g(x_0) (\nabla g(x_0))^T.

In the same way we can obtain the asymptotic approximation in the case of non-normal random variables with p.d.f. f(x). Here the asymptotic approximation is given by

P(F) \approx (2\pi)^{(n-1)/2} \frac{f(x^*)}{|\nabla l(x^*)| \, |\det(H^*(x^*))|^{1/2}}        (22)

with l(x) = \ln(f(x)) the log-likelihood function (see [4]). The matrix H^*(x^*) is defined by

H^*(x^*) = A^T H(x^*) A.        (23)

Here H(x^*) = \left( l_{ij}(x^*) - |\nabla l(x^*)| \, |\nabla g(x^*)|^{-1} g_{ij}(x^*) \right)_{i,j=1,\dots,n} and A = (a_1, \dots, a_{n-1}), where the a_i's form an orthonormal basis of the tangential space of the limit state surface at x^*. If the log-likelihood function has a regular maximum with respect to the failure domain at x^*, then we can compute \det(H^*(x^*)) again as

\det(H^*(x^*)) = \det\left( P H(x^*) P - n(x^*) n(x^*)^T \right).        (24)

Here P = I_n - n(x^*) n(x^*)^T.
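As a small numerical illustration (an assumed example, not from the paper), the sketch below evaluates the determinant in equation (21) from a Cholesky factorization, as suggested in the text, and cross-checks it against the determinant of the matrix reduced to an orthonormal basis of the tangential space; H_scaled plays the role of H(x_0) from equation (19).

```python
import numpy as np

def curvature_factor_det(H_scaled, grad_g):
    """Evaluate det(D) from eq. (21), det(P^T H P + |grad g|^{-2} grad g grad g^T),
    via a Cholesky factorization, i.e. without any eigenvalue analysis."""
    n = len(grad_g)
    v = grad_g / np.linalg.norm(grad_g)
    P = np.eye(n) - np.outer(v, v)                 # projection onto the tangential space
    D = P.T @ H_scaled @ P + np.outer(v, v)
    L = np.linalg.cholesky(0.5 * (D + D.T))        # succeeds iff D is positive definite
    return np.prod(np.diag(L)) ** 2                # det(D) = (product of Cholesky diagonal)^2

# cross-check against an explicit reduction to an orthonormal tangential basis
rng = np.random.default_rng(2)
n = 5
grad_g = rng.standard_normal(n)
M = rng.standard_normal((n, n))
H_scaled = M @ M.T / n + np.eye(n)                 # symmetric positive definite example

v = grad_g / np.linalg.norm(grad_g)
Q, _ = np.linalg.qr(np.column_stack([v, np.eye(n)[:, :n - 1]]))
T = Q[:, 1:]                                       # orthonormal basis of {x : v^T x = 0}
print(curvature_factor_det(H_scaled, grad_g))
print(np.linalg.det(T.T @ H_scaled @ T))           # both numbers should agree
```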

4. PARAMETER DEPENDENCE OF THE PML-POINT

We consider now the case that the reliability problem depends on a parameter \tau. Then the point of maximal likelihood (PML) depends on this parameter value. The change of the beta value in FORM theory under parameter changes was treated in [1].

We consider an integral as in equation (1), but with only a scalar parameter \tau. Following the results of the last section, the failure probability is approximated by making a Taylor expansion of the log-likelihood function around the PML and using the Laplace method. We assume that for all feasible values of \tau there is exactly one PML x^* = x^*(\tau) on the limit state surface G(\tau). The gradients of these functions with respect to the first n variables x_1, \dots, x_n are denoted by \nabla_x f (resp. \nabla_x g) and the partial derivatives with respect to \tau by f_\tau (resp. g_\tau). The Hessian of a function f(x,\tau) with respect to the first n variables is written as H_f. Since for a fixed value of \tau the PML x^*(\tau) is unique, we write it in shorthand notation as x^*. The vector of the first derivatives of x^* with respect to \tau is written as x^*_\tau.

This point is a stationary point of the Lagrangian function L(x, \lambda, \tau) defined by L = f - \lambda g. Therefore at the point x^*(\tau) the following equation system must be fulfilled:

\nabla_x f - \lambda \nabla_x g = 0_n, \qquad g = 0.        (25)

To find the derivatives of the coordinates of the PML with respect to parameter changes, we differentiate this system with respect to \tau and set the derivatives equal to zero. This gives then

H_f x^*_\tau + \nabla_x f_\tau - \lambda_\tau \nabla_x g - \lambda (H_g x^*_\tau + \nabla_x g_\tau) = 0_n,        (26)

(\nabla_x g, x^*_\tau) + g_\tau = 0.        (27)

Rearranging the terms gives

(H_f - \lambda H_g) x^*_\tau + \frac{\partial}{\partial \tau}(\nabla_x f - \lambda \nabla_x g) = \lambda_\tau \nabla_x g,        (28)

(\nabla_x g, x^*_\tau) = -g_\tau,        (29)

where \lambda is held fixed in the \tau-derivative. Since always \nabla_x f - \lambda \nabla_x g = 0_n, deleting this term we get (H_f - \lambda H_g) x^*_\tau = \lambda_\tau \nabla_x g. This gives then for the vector x^*_\tau the form

x^*_\tau = \lambda_\tau (H_f - \lambda H_g)^{-1} \nabla_x g.        (30)

To determine the value of \lambda_\tau we compute the scalar product of x^*_\tau and \nabla_x g, yielding

(x^*_\tau, \nabla_x g) = \lambda_\tau (\nabla_x g)^T (H_f - \lambda H_g)^{-1} \nabla_x g.        (31)

From equation (29) it follows that the left-hand side of (31) is equal to -g_\tau, and so we get

-g_\tau = \lambda_\tau (\nabla_x g)^T (H_f - \lambda H_g)^{-1} \nabla_x g.        (32)

This gives then for the derivative \lambda_\tau

\lambda_\tau = -g_\tau \left[ (\nabla_x g)^T (H_f - \lambda H_g)^{-1} \nabla_x g \right]^{-1}.        (33)

Inserted into equation (30) we obtain

x^*_\tau = -g_\tau \left[ (\nabla_x g)^T (H_f - \lambda H_g)^{-1} \nabla_x g \right]^{-1} (H_f - \lambda H_g)^{-1} \nabla_x g.        (34)

The change of the value f^* = f(x^*(\tau), \tau) is given by

f^*_\tau = (\nabla_x f, x^*_\tau) + f_\tau.        (35)

Due to the Lagrange multiplier theorem, we can replace the gradient of f by the gradient of g, giving

f^*_\tau = \lambda (\nabla_x g, x^*_\tau) + f_\tau.        (36)

From equation (29) we get then

f^*_\tau = -\lambda g_\tau + f_\tau.        (37)

If f_\tau = 0, the Lagrange multiplier \lambda gives the change of the value of f^* relative to the negative change of g with respect to \tau. If the constraint is given in the form g(x) - \tau = 0 and the p.d.f. does not depend on \tau, we obtain the simple form

f^*_\tau = \lambda.        (38)

If we consider the approximation for the failure probability given in equation (22) and we neglect the change of the quantities in the denominator, we get approximately

F'(\tau) \approx (2\pi)^{(n-1)/2} \frac{f^*_\tau}{|\nabla l(x^*)| \, |\det(H^*(x^*))|^{1/2}}.        (39)
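The closed-form sensitivities (33), (34) and (37) are straightforward to evaluate once the gradients, Hessians and the Lagrange multiplier at the PML are available. The following sketch uses purely hypothetical input values, chosen only to exercise the formulas; for a constraint of the form g(x) - \tau = 0 (so g_\tau = -1) and f_\tau = 0 it reproduces f^*_\tau = \lambda as in equation (38).

```python
import numpy as np

def pml_sensitivities(grad_g, g_tau, f_tau, H_f, H_g, lam):
    """Sensitivities of the PML point with respect to the parameter tau:
    lambda_tau from eq. (33), x*_tau from eq. (34) (i.e. eq. (30) with
    lambda_tau inserted) and f*_tau from eq. (37)."""
    M = H_f - lam * H_g                       # H_f - lambda H_g
    w = np.linalg.solve(M, grad_g)            # (H_f - lambda H_g)^{-1} grad_x g
    lam_tau = -g_tau / (grad_g @ w)           # eq. (33)
    x_star_tau = lam_tau * w                  # eq. (34)
    f_star_tau = -lam * g_tau + f_tau         # eq. (37)
    return lam_tau, x_star_tau, f_star_tau

# hypothetical inputs, for illustration only
H_f = -np.eye(3)                              # e.g. Hessian of a log-density at the PML
H_g = np.zeros((3, 3))                        # linear limit state: vanishing Hessian
grad_g = np.array([1.0, 0.5, -0.25])
lam_tau, x_star_tau, f_star_tau = pml_sensitivities(grad_g, g_tau=-1.0, f_tau=0.0,
                                                    H_f=H_f, H_g=H_g, lam=0.8)
print(lam_tau, x_star_tau, f_star_tau)        # f_star_tau equals lambda = 0.8, cf. eq. (38)
```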

References

[1] P. Bjerager and S. Krenk. Parametric sensitivity in first order reliability theory. Journal of the Engineering Mechanics Division ASCE, 115:1577-1582, 1989.

[2] K. Breitung. Asymptotic approximations for multinormal integrals. Journal of the Engineering Mechanics Division ASCE, 110(3):357-366, 1984.

[3] K. Breitung. Parameter sensitivity of failure probabilities. In A. Der Kiureghian and P. Thoft-Christensen, editors, Reliability and Optimization of Structural Systems '90, Proceedings of the 3rd IFIP WG 7.5 Conference, Berkeley, California, pages 43-51, New York, 1991. Springer. Lecture Notes in Engineering 61.

[4] K. Breitung. Probability approximations by log likelihood maximization. Journal of the Engineering Mechanics Division ASCE, 117(3):457-477, 1991.

[5] K. Breitung. Asymptotic Approximations for Probability Integrals. Springer, Berlin, 1994. Lecture Notes in Mathematics, Nr. 1592.

[6] A. Der Kiureghian and M. De Stefano. Efficient algorithm for second-order reliability analysis. Journal of the Engineering Mechanics Division ASCE, 117(12):2904-2923, 1991.

[7] A. Der Kiureghian, Y. Zhang, and Ch.-Ch. Li. Inverse reliability problem. Journal of the Engineering Mechanics Division ASCE, 120(5):1154-1159, 1994.