Homogenization in probabilistic terms: the variational principle and some approximate solutions
Victor L. Berdichevsky
Mechanical Engineering, Wayne State University, Detroit, MI, USA
(Dated: October 9, 2015)
Abstract

Studying materials with evolving random microstructures requires knowledge of the probabilistic characteristics of local fields, because the path of the microstructure evolution is controlled by the local fields. The probabilistic characteristics of local fields are determined by the probabilistic characteristics of material properties. This paper considers the problem of finding the probabilistic characteristics of local fields when the probabilistic characteristics of material properties are given. The probabilistic characteristics of local fields are sought from the variational principle for probabilistic measure. Minimizers of this variational problem provide all statistical information on local fields as well as the effective coefficients. Approximate solutions are obtained for electric current in composites in two cases: multi-phase isotropic composites with lognormal distribution of conductivities, and two-phase isotropic composites. The solutions contain a great deal of statistical information that has not previously been available by analytical treatments.

Keywords: homogenization, composites, bounds, variational principles, Hashin-Shtrikman

I. INTRODUCTION

Numerical studies reveal the enormous complexity of local fields in composites with random microstructures (see, e.g., Escola et al., Willot et al., Kalidindi et al.). There is no doubt that the local fields can be described adequately only in probabilistic terms. Such a description is needed not only for characterization of the state of a composite, but also for prediction of the evolution of microstructures due to plasticity, fatigue, or fracture, where the path of evolution is controlled by the local fields. At the moment, the probabilistic characteristics of local fields are obtained by statistical analysis of a huge number of numerical simulations conducted for different realizations of microstructures. Apparently, a more practical way is desirable.
In this paper we explore the possibility of using for such purposes the variational principle for probabilistic measure (Berdichevsky 1987). The variational principle is based on a simple idea. Consider conductivity of composites. Conductivity may have different physical meanings (electric, heat, filtration, etc.); for definiteness we will use the terminology of electrical conductivity of composites. Let the composite occupy some three-dimensional region V. Denote by a_ij(x) the electrical conductivities, x being points of three-dimensional space referred to a Cartesian frame x_i; small Latin indices run through the values 1, 2, 3. The electric potential φ(x) is prescribed at the boundary ∂V of region V as a linear function of coordinates,

    φ(x) = v_i x_i    (1)

where the constants v_i are the components of the external homogeneous electric field; summation over repeated indices is always implied. Denote by ψ(x) the electric potential caused by micro-inhomogeneities:

    φ(x) = v_i x_i + ψ(x) in V,    ψ(x) = 0 on ∂V.

The corresponding electric field is denoted by u_i, u_i = ∂ψ/∂x_i. In homogeneous conductors ψ = 0 and u_i = 0. As is known, the true electric potential minimizes the functional

    (1/|V|) ∫_V a_ij(x) (v_i + u_i)(v_j + u_j) dV    (2)

on the set of all functions ψ(x) vanishing on ∂V. In (2), a_ij(x) are the prescribed conductivities, and |V| is the volume of region V. In the theory of composites, the functional to be minimized has the meaning of either energy or dissipation. For electric conductors, (2) is the dissipation per unit volume (up to the factor 1/2). In general considerations we take the liberty of calling the functional to be minimized the energy functional, keeping in mind that in a particular physical problem its actual physical meaning could be dissipation. Due to the linearity of the problem, the minimum value of the functional (2) is a quadratic function of the parameters v_i. The coefficients a^e_ij of this quadratic form have the meaning of effective coefficients of the composite,

    a^e_ij v_i v_j = min_ψ (1/|V|) ∫_V a_ij(x) (v_i + u_i)(v_j + u_j) dV.    (3)

If the characteristic length of micro-inhomogeneities, ε, is much smaller than the size of region V, then a^e_ij fluctuate only slightly when the microstructure changes. The following key observation, giving rise to the modern homogenization theory of random structures, was made by S. Kozlov (1978): under some physically non-constraining assumptions, a^e_ij take certain deterministic values as ε → 0. In the limit problem the boundary condition ψ = 0 on ∂V can be
replaced by an integral condition,

    (1/|V|) ∫_V u_i dV = 0.

Another observation was that the problem is (asymptotically) scale invariant and, instead of tending ε to zero, one can fix ε and let V tend to infinity. Denote by ⟨φ⟩ the space average of a function φ(x) defined in the entire space,

    ⟨φ⟩ = lim_{|V|→∞} (1/|V|) ∫_V φ(x) dV.

Then the statement formulated can be written as

    a^e_ij v_i v_j = min ⟨a_ij(x) (v_i + u_i)(v_j + u_j)⟩    (4)

where the minimum is sought over all ψ such that

    ⟨u_i⟩ = 0.    (5)

This is the so-called Kozlov cell problem for random structures. Kozlov justified (4) by considering the two-scale expansion of the solution of the original problem (Kozlov 1978, 1979; see also Jikov, Kozlov, Oleinik, 1994). Numerous works on the subject have been reviewed by Milton (2002), Cherkaev (2000), Torquato (2002), and Berdichevsky (2009). The Euler equations of variational problem (4) are partial differential equations with random coefficients. Our goal is to find the probabilistic characteristics of the solution. To this end, note that the functional to be minimized has the meaning of the space average of some random fields. If these fields are stationary and ergodic, then the space average can be replaced by the mathematical expectation. Let f(a, u) be the joint one-point probability density of the conductivities a_ij and the electric field u_i. Then the space average of dissipation is equal to its mathematical expectation:

    ⟨a_ij(x) (v_i + u_i)(v_j + u_j)⟩ = ∫ a_ij (v_i + u_i)(v_j + u_j) f(a, u) da du.

On the right-hand side, a denotes points in the six-dimensional space of conductivities, a = {a_11, a_12, a_13, a_22, a_23, a_33}, u = {u_1, u_2, u_3}; da and du are volume elements in a-space and
u-space, respectively. So, one can expect that the effective characteristics and the statistics of local fields can be determined from the variational principle

    a^e_ij v_i v_j = min_{f(a,u)} ∫ a_ij (v_i + u_i)(v_j + u_j) f(a, u) da du    (6)

where minimization is conducted over the probabilistic measure f(a, u). The function f(a, u) must obey some constraints. First of all,

    f(a, u) ≥ 0,    ∫ f(a, u) da du = 1.

For stationary ergodic fields, vanishing of the space average of u_i (5) is written in terms of the probability density as

    ∫ u_i f(a, u) da du = 0.    (7)

Besides, the integral of f(a, u) over u must be equal to the one-point probability density of conductivities f(a), which is a prescribed material property:

    ∫ f(a, u) du = f(a).    (8)

Another constraint stems from the fact that the one-point probability density f(a, u) can be derived from the two-point probability density f(a, u; ξ; a′, u′), which is the joint probability density of the values of the random fields a(x) and u(x) at two points separated by a vector ξ:

    f(a, u) = ∫ f(a, u; ξ; a′, u′) da′ du′.    (9)

On the other hand, the two-point probability density f(a, u; ξ; a′, u′) is linked to the two-point probability density of conductivities f(a; ξ; a′), which is known because the material properties are prescribed:

    ∫ f(a, u; ξ; a′, u′) du du′ = f(a; ξ; a′).    (10)

Equations (9) and (10) impose constraints on admissible f(a, u). There are other constraints as well. The complete set of constraints and the proof of the variational principle for probabilistic measure were given in (Berdichevsky 1987) and discussed in detail in (Berdichevsky 2009). The set of constraints turns out to be infinite. Therefore, to employ the variational principle one has either to provide a sufficiently rich set of trial probability densities obeying
all the constraints, or to truncate the chain of constraints and work within an approximate setting. In this paper examples are given for both ways. One example is concerned with multi-phase isotropic composites whose conductivities are distributed lognormally. The lognormal distribution has two parameters, which usually give enough freedom to approximate distributions of positive numbers with one peak. The case of lognormally distributed conductivities is interesting also from another perspective: the Hashin-Shtrikman bounds for such conductivities degenerate. We choose the trial local electric field and current to be Gaussian fields. For such fields minimization can be done analytically. We obtain statistical characteristics which have not been available previously: probability densities of electric fields and currents, variances, correlation functions, etc. As a by-product we get bounds for the effective coefficient, which turn out to be quite tight. The second example employs truncation of the infinite chain of constraints. The simplest truncation, keeping only the one-point probability density, yields a trivial result: the Reuss approximation. The first non-trivial truncation is to keep the constraints for one-point and two-point probability densities. However, the truncation involves a difficult issue: it turns out that the infinite chain of constraints imposes some implicit conditions on the two-point probability density. When we drop the infinite tail of constraints, these implicit conditions must be made explicit. One such explicit condition, the positive definiteness condition, is obtained in this paper. The complete set of explicit conditions for the two-point probability density remains unknown. Within the truncated variational problem, we study two-dimensional two-phase composites. A minimizer is obtained which obeys the constraints approximately. It provides the one-point and two-point statistics of local fields and the effective coefficients.
The variational problem for probabilistic measure does not contain small parameters and, most likely, can be solved only numerically. The examples considered aim to illustrate the peculiarities of the problem and the scope of statistical information involved. In the next Section a complete formulation of the variational principle is given. This is followed by a study of composites with lognormal conductivities in Section 3, a discussion of truncation in Section 4 (linear case) and Section 5 (nonlinear case), the introduction of statistical characteristics of two-phase composites in Section 6, and the construction of an approximate solution in Section 7. Most technicalities are moved to Appendices A-D.
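As a preliminary illustration of the ergodic homogenization statement (4)-(5), a minimal one-dimensional sketch can be helpful: for a layered conductor carrying a constant current, the cell problem is solvable in closed form and the effective coefficient is the harmonic mean of the layer conductivities, which becomes deterministic as the number of layers grows. (The layer geometry and the lognormal layer distribution below are illustrative assumptions, not a case treated in the paper.)

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000                                       # number of layers (cells)
a = rng.lognormal(mean=0.0, sigma=0.5, size=n)    # random layer conductivities

# 1D "cell problem": a constant current j flows through layers in series
# under unit average field <E> = 1; then j = a_eff with a_eff = 1/<1/a>.
a_eff = 1.0 / np.mean(1.0 / a)

# Kozlov-type statement: as the volume grows, a_eff approaches a deterministic
# value, here the harmonic mean of the distribution (E[1/a] = e^{sigma^2/2}).
a_exact = 1.0 / np.exp(0.5**2 / 2)
print(a_eff, a_exact)
```

The sample value fluctuates around the deterministic limit with an error that decays as the number of layers increases, which is the one-dimensional counterpart of the statement that a^e_ij become deterministic as ε → 0.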
II. THE VARIATIONAL PRINCIPLE FOR PROBABILISTIC MEASURE

We begin with a reminder of the basic facts from random field theory (an excellent introduction to this subject from the perspective of applications is given by Rytov et al. 1989). A random field u(x) in unbounded space is completely described by an infinite chain of probability densities: the one-point probability density f(x; u), the two-point probability density f(x, u; x′, u′), etc. The one-point probability density f(x; u) is the probability density of the values of the random field u at point x; the two-point probability density f(x, u; x′, u′) is the joint probability density of the values u, u′ which the random field takes at points x and x′, etc. Probability densities are symmetric with respect to transposition of any pairs {x, u} and {x′, u′}:

    f(x, u; x′, u′) = f(x′, u′; x, u),
    f(x, u; x′, u′; x″, u″) = f(x′, u′; x, u; x″, u″) = f(x′, u′; x″, u″; x, u), etc.    (11)

For statistically stationary fields, which we consider, these functions are invariant with respect to translations. Thus the one-point probability density does not depend on x, and the two-point probability density is a function of the shift ξ = x′ − x:

    f(x, u; x′, u′) = f(u; ξ; u′),    (12)

and similar relations hold for all multi-point probability densities. Translation invariance (12) combined with the transposition symmetry (11) yields the relation

    f(u; ξ; u′) = f(u′; −ξ; u).    (13)

The probability densities must be non-negative,

    f(u) ≥ 0,    f(u; ξ; u′) ≥ 0, ...    (14)

and compatible, i.e. integration of the n-point probability density gives the (n−1)-point probability density:

    f(u) = ∫ f(u; ξ; u′) du′,    f(u; ξ; u′) = ∫ f(u; ξ; u′; ξ′; u″) du″, etc.    (15)

Besides, the one-point probability density is normalized to unity,

    ∫ f(u) du = 1.    (16)
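The chain relations (13)-(16) can be checked on the simplest concrete example of a stationary random field: a two-state Markov chain on a one-dimensional lattice (a hypothetical toy model introduced here only for illustration), whose two-point probabilities are available in closed form.

```python
import numpy as np

# stationary two-state Markov chain on a 1D lattice as a toy random field
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])          # transition matrix over one lattice step

# stationary one-point distribution: left eigenvector of P for eigenvalue 1
w, V = np.linalg.eig(P.T)
pi = np.real(V[:, np.argmax(np.real(w))])
pi = pi / pi.sum()

def two_point(shift):
    """joint pmf f(u; shift; u') = pi_u (P^shift)_{u,u'} for shift >= 0"""
    return pi[:, None] * np.linalg.matrix_power(P, shift)

f3 = two_point(3)
print(f3.sum(axis=1), pi)   # compatibility (15): marginal returns pi
print(f3.sum())             # normalization (16)
```

Two-state chains satisfy detailed balance, so the matrix f3 is also symmetric, which is the discrete counterpart of the symmetry (13) for a field whose statistics are invariant under ξ → −ξ.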
The random field of material characteristics a_ij is completely described by the family of probability densities f(a), f(a; ξ; a′), ... that obey constraints (11)-(16) (with u replaced by a). These probability densities are assumed to be known. The energy functional depends on two random fields, a_ij(x) and u_i(x). These fields are described by the family of joint probability densities f(a, u), f(a, u; ξ; a′, u′), .... They must be non-negative,

    f(a, u) ≥ 0,    f(a, u; ξ; a′, u′) ≥ 0, ...    (17)

possess the transposition symmetries,

    f(a, u; ξ; a′, u′) = f(a′, u′; −ξ; a, u), ...    (18)

and obey the compatibility constraints,

    ∫ f(a, u; ξ; a′, u′) da′ du′ = f(a, u),
    ∫ f(a, u; ξ; a′, u′; ξ′; a″, u″) da″ du″ = f(a, u; ξ; a′, u′), ...    (19)

Besides, they are linked to the known probability densities of the material characteristics:

    ∫ f(a, u) du = f(a),    ∫ f(a, u; ξ; a′, u′) du du′ = f(a; ξ; a′), ...    (20)

There is one more essential constraint: the random vector field u_i must be potential. The condition of potentiality of random fields is well known in turbulence theory (see, e.g., Monin and Yaglom 1966, or Berdichevsky 2009, Sect. 6.8). It involves the correlation tensor of the random field u_i(x),

    B_ij(ξ) = M u_i(x) u_j(x + ξ).

Here M denotes mathematical expectation. A stationary random field u_i is potential if and only if there exists a function B(ξ) such that the correlation tensor of the vector field has the form

    B_ij(ξ) = −∂²B(ξ)/∂ξ_i ∂ξ_j,    (21)

and B(ξ) has a non-negative Fourier transform,

    B(k) = ∫ B(ξ) e^{−ikξ} dξ ≥ 0.    (22)
Here kξ is the scalar product of the vectors ξ and k; dξ = dξ_1 dξ_2 in 2D and dξ = dξ_1 dξ_2 dξ_3 in 3D. We follow the tradition of the physical literature of not introducing a special notation for the Fourier transform: the Fourier transform of a function of ξ is marked by changing the argument to the wave number k; in this way a function and its Fourier transform can be viewed as two presentations of the same function. Note for future reference the formula for the inversion of the Fourier transform,

    B(ξ) = ∫ B(k) e^{ikξ} dV_k,

where dV_k = dk_1 dk_2/(2π)² in 2D and dV_k = dk_1 dk_2 dk_3/(2π)³ in 3D. The function B(ξ) has the meaning of the correlation function of the electric potential,

    B(ξ) = M ψ(x) ψ(x + ξ).    (23)

Continuity of the electric potential in probability follows from continuity of B(ξ) at ξ = 0. The potentiality condition (21) takes an especially simple form in terms of Fourier transforms:

    B_ij(k) = B(k) k_i k_j    (24)

with

    B(k) ≥ 0.    (25)

Potentiality of the field u_i imposes a constraint on the two-point probability density f(u; ξ; u′):

    ∫ u_i u′_j f(u; ξ; u′) du du′ = −∂²B(ξ)/∂ξ_i ∂ξ_j.    (26)

Due to the compatibility condition,

    ∫ f(a, u; ξ; a′, u′) da da′ = f(u; ξ; u′),

the potentiality constraint (26) can also be written as a constraint on the joint two-point probability density f(a, u; ξ; a′, u′):

    B_ij(ξ) = ∫ u_i u′_j f(a, u; ξ; a′, u′) da du da′ du′ = −∂²B(ξ)/∂ξ_i ∂ξ_j.    (27)

Apparently, we have to include all the constraints (17)-(20) along with the potentiality condition (27) in the variational problem for probability measure. We arrive at the following.
Variational principle. The true probabilistic characteristics of the local field minimize the functional

    I(f) = ∫ a_ij (v_i + u_i)(v_j + u_j) f(a, u) da du    (28)

on the set of probability densities obeying (7), (17)-(20) and (27). The minimum value of this functional determines the effective coefficients:

    a^e_ij v_i v_j = min_f I(f).    (29)

The reader is referred to (Berdichevsky 1987, 2009) for further details. The dual variational problem for probability measure arises from the variational problem which is dual to (3). It reads:

    a^e_ij v_i v_j = max_{p_i} (1/|V|) ∫_V [2 p_i v_i − a^{(−1)}_ij p_i p_j] dV    (30)

where the maximum is sought over all divergence-free vector fields p_i(x),

    ∂p_i/∂x_i = 0,    (31)

and a^{(−1)}_ij is the inverse tensor to a_ij. The general solution of (31) is

    p_i = e_ijk ∂θ_k/∂x_j,    (32)

e_ijk being the Levi-Civita symbols and θ_i(x) arbitrary functions. Therefore, the dual problem (30) can also be written as

    a^e_ij v_i v_j = max_{θ_i} (1/|V|) ∫_V [2 v_i e_ijk ∂θ_k/∂x_j − a^{(−1)}_ij e_ikl e_jmn (∂θ_l/∂x_k)(∂θ_n/∂x_m)] dV.    (33)

Applying the same reasoning as in the motivation of the variational principle for the functional I(f) (28), we obtain the following.

Dual variational principle. The true probabilistic measure of the local fields maximizes the functional

    J(f) = ∫ [2 v_i e_ijk θ_jk − a^{(−1)}_ij e_ikl θ_kl e_jmn θ_mn] f(a, θ) da dθ,    (34)
where θ denotes the set of variables {θ_ij} and dθ the volume element in the nine-dimensional space of variables θ_ij. The maximization is conducted over the probability densities f(a, θ), f(a, θ; ξ; a′, θ′), ..., which are non-negative, obey the compatibility conditions,

    ∫ f(a, θ; ξ; a′, θ′) da′ dθ′ = f(a, θ), ...    (35)

    ∫ f(a, θ) dθ = f(a),    ∫ f(a, θ; ξ; a′, θ′) dθ dθ′ = f(a; ξ; a′), ...    (36)

possess symmetry with respect to transposition of arguments, and satisfy the potentiality condition

    ∫ f(a, θ; k; a′, θ′) θ_ij θ′_kl da dθ da′ dθ′ = B_ik(k) k_j k_l,    (37)

where the Fourier transform B_im(k) of the correlation function of the field θ_i(x),

    B_im(ξ) = M θ_i(x) θ_m(x + ξ),

is a tensor with non-negative eigenvalues, i.e. for any λ and k,

    B_im(k) λ_i λ_m ≥ 0.

The maximum value of J(f) determines the effective coefficients:

    a^e_ij v_i v_j = max_f J(f).    (38)

One peculiarity of the formulated variational principles is worth noting. Compare the chain of constraints in the variational principles with the Bogoliubov-Born-Green-Kirkwood-Yvon (BBGKY) chain of equations for probability densities in the statistical mechanics of gases (see, e.g., Ferziger, Kaper 1972). At first glance, the chain of constraints of the variational principles is simpler than the BBGKY chain, because the constraints involving, say, one-point and two-point probability densities do not contain n-point probability densities with n > 2; in the BBGKY chain such constraints are not closed and involve also the three-point probability density. The origin of the difference between the two chains is the different meaning of the notion "n-point probability density" in gas theory, where "point" means "particle", and the probability density is, in fact, the joint probability density of the positions of n particles at the same instant. Therefore, in our terms, gas theory operates only with one-point probability densities of many variables. There are no "time shifts" in gas theory that mimic the "space shifts" for composites.
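The spectral form (24)-(25) of the potentiality condition can be verified directly on a synthetic potential field: if u = ∇ψ on a periodic grid, then in Fourier space û_i = i k_i ψ̂, so the one-sample spectral tensor û_i û_j* equals B(k) k_i k_j with B(k) = |ψ̂|² ≥ 0 and is of rank one. A small 2D sketch (the grid size and white-noise potential are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
phi = rng.standard_normal((n, n))        # random periodic potential psi
phi_hat = np.fft.fft2(phi)
k = 2*np.pi*np.fft.fftfreq(n)
k1, k2 = np.meshgrid(k, k, indexing="ij")

# potential field u = grad(psi):  u_hat_i = i k_i psi_hat
u1_hat = 1j*k1*phi_hat
u2_hat = 1j*k2*phi_hat

# one-sample spectral tensor B_ij(k) = u_hat_i u_hat_j^*
B11 = (u1_hat*np.conj(u1_hat)).real      # = k1^2 |psi_hat|^2 >= 0
B22 = (u2_hat*np.conj(u2_hat)).real      # = k2^2 |psi_hat|^2 >= 0
B12 = (u1_hat*np.conj(u2_hat)).real      # = k1 k2 |psi_hat|^2

# rank-one structure B_ij = B(k) k_i k_j implies B12^2 = B11*B22
print(np.allclose(B12**2, B11*B22))
```

For a non-potential (e.g. divergence-free) field the same spectral tensor would instead be transverse to k, so this rank-one factorization is a genuine signature of potentiality.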
In gas theory, truncation of the BBGKY chain brings the closure problem: one needs to express the n-point probability density in terms of the (n−1)-point probability density. (This is why the positive definiteness condition, which we will consider in Section 4, does not arise in gas theory.) In composites, at first glance, such a problem does not arise, because the truncation keeping n-point probability densities does not contain s-point probability densities with s > n. In fact, the difficulty is just moved to another issue: one has to establish a criterion that a function can be the probability density of some random field. This is discussed in Section 4.

III. LOGNORMAL CONDUCTIVITY

In this Section we construct approximate solutions for composites consisting of many isotropic phases with lognormally distributed conductivities. Lognormality means that the conductivity has the form

    a(x) = a_0 e^{ν(x)}    (39)

where ν(x) is a Gaussian field with zero mean value and a_0 is a constant. All multi-point probability densities of the conductivities can be found from the following feature of Gaussian fields, which can also be considered as the definition of Gaussian fields: for any deterministic function φ(x),

    M exp[i ∫ ν(x) φ(x) dx] = exp[−(1/2) ∫∫ A(x, x′) φ(x) φ(x′) dx dx′].    (40)

The quadratic form on the right-hand side of (40) must be non-negative: for any φ(x),

    ∫∫ A(x, x′) φ(x) φ(x′) dx dx′ ≥ 0.    (41)

The function A(x, x′) has the meaning of the correlation function of the Gaussian field ν(x):

    A(x, x′) = M ν(x) ν(x′).    (42)

Formula (42), as well as the relations for other moments, follows from (40) by expansion of the exponential function in a Taylor series. The coefficients of the quadratic form (41) must be symmetric,

    A(x, x′) = A(x′, x).    (43)

For stationary fields, A(x, x′) = A(x − x′) = A(ξ). Due to the symmetry (43), A(ξ) is an even function of ξ. Accordingly, the Fourier transform of A(ξ), A(k), is a real function. For stationary
fields, condition (41) takes a simple form in terms of Fourier transforms:

    ∫ A(k) φ(k) φ*(k) dV_k ≥ 0,    (44)

where the symbol * denotes complex conjugation. Inequality (44) shows that non-negativeness of the quadratic form (41) is equivalent to non-negativeness of A(k):

    A(k) ≥ 0.    (45)

All statistical properties of a lognormal composite can be expressed in terms of the one function A(ξ). For example, to find the one-point probability density of a(x), we set in (40) φ(x̃) = λ δ(x̃ − x). Then (40) gives the characteristic function for the one-point distribution of ν(x),

    M exp[i λ ν(x)] = exp[−(1/2) A λ²],    A ≡ A(ξ)|_{ξ=0}.

Knowing the characteristic function, we obtain the one-point probability distribution of ν(x),

    PD(ν) = (1/2π) ∫ exp[−iλν] exp[−(1/2) A λ²] dλ = (1/√(2πA)) exp[−ν²/(2A)],

and the one-point probability density of a(x),

    f(a) = M δ(a − a_0 exp[ν(x)]) = ∫ δ(a − a_0 e^{ν}) PD(ν) dν = (1/(√(2πA) a)) exp[−ln²(a/a_0)/(2A)].

Similarly, setting in (40) φ(x̃) = λ δ(x̃ − x) + λ′ δ(x̃ − x′), one gets the characteristic function of the two-point distribution of conductivity and computes the two-point probability density of a(x). In the same way all multi-point probability densities can be found. One can check that the multi-point probability densities derived from (40) obey all the constraints on probability densities described in Section 2. Therefore, (40) indeed defines a random field ν(x). All details of the computations for lognormal distributions are given in Appendix A. We consider the general case of statistically anisotropic composites. If the composite is statistically isotropic, then A(ξ) depends on ξ only through the length of the vector ξ, |ξ|, while A(k) is a function of |k|.
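The one-point density just derived can be checked by direct sampling: draw ν from a zero-mean Gaussian with variance A and form a = a_0 e^{ν}; the sample mean must approach ⟨a⟩ = a_0 e^{A/2} (used below in (52)), and the sample median must approach a_0. (The numerical values of a_0 and A are arbitrary illustrative choices.)

```python
import numpy as np

rng = np.random.default_rng(2)
a0, A = 2.0, 0.6                     # a0: scale, A = M[nu^2]: log-variance
nu = rng.normal(0.0, np.sqrt(A), size=500_000)
a = a0*np.exp(nu)                    # lognormal conductivity samples (39)

# moments implied by the lognormal one-point density f(a)
mean_exact = a0*np.exp(A/2)          # <a> = a0 e^{A/2}
print(np.mean(a), mean_exact)
print(np.median(a), a0)              # median of e^{nu} is 1
```

The gap between the mean a_0 e^{A/2} and the median a_0 grows with A, reflecting the long upper tail of the conductivity distribution responsible for the degeneration of the Hashin-Shtrikman bounds mentioned above.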
We seek approximate solutions for the electric field u_i(x) under the assumption that the pair {ν, u} is a stationary Gaussian field. This means that for any deterministic functions φ(x), η_i(x),

    M exp[i ∫ (ν(x) φ(x) + u_j(x) η_j(x)) dx]
      = exp[−(1/2) ∫∫ (A(x−x′) φ(x) φ(x′) + 2 C_i(x−x′) φ(x) η_i(x′) + B_ij(x−x′) η_i(x) η_j(x′)) dx dx′].    (46)

Expanding (46) in a Taylor series, one determines the meaning of the two basic characteristics of the solution, the functions C_i(ξ) and B_ij(ξ):

    C_i(ξ) = M ν(x) u_i(x + ξ),    B_ij(ξ) = M u_i(x) u_j(x + ξ).

The quadratic form in (46) must be non-negative. Similarly to (45), this condition brings a constraint on the correlation functions: for any k, φ and η_i,

    A(k) φ² + 2 C_i(k) φ η_i + B_ij(k) η_i η_j ≥ 0.    (47)

The joint one-point probability density of ν and u_i can be found from (46): choosing φ = λ δ(x − x′), η_i = μ_i δ(x − x′) we get

    M e^{i(λν + μ_j u_j)} = exp[−(1/2)(Ā λ² + 2 C̄_i λ μ_i + B̄_ij μ_i μ_j)],    (48)

where

    C̄_i = C_i(ξ)|_{ξ=0},    B̄_ij = B_ij(ξ)|_{ξ=0}.    (49)

Then

    f(ν, u) = ∫ e^{−iλν − iμ_j u_j} exp[−(1/2)(Ā λ² + 2 C̄_i λ μ_i + B̄_ij μ_i μ_j)] dλ d³μ/(2π)⁴.    (50)

The function f(ν, u) can be computed from (50) explicitly, but it is more convenient to use its integral form (50). Knowing the one-point probability density (50), we can find the functional to be minimized (see the details in Appendix A):

    I = ā [v_i v_i + 2 v_i C̄_i + B̄_ii + C̄_i C̄_i].    (51)

In (51), ā is the average value of the conductivity,

    ā = a_0 ∫ e^{ν} f(ν, u) dν du = a_0 e^{A/2}.    (52)
The functional I is a functional of C_i(k) and B_ij(k), because the constants C̄_i and B̄_ij introduced in (49) can be expressed in terms of integrals of C_i(k) and B_ij(k) over wave numbers:

    C̄_i = ∫ C_i(k) dV_k,    B̄_ij = ∫ B_ij(k) dV_k.    (53)

We obtain the minimization problem for the functional

    I = ā [ v_i v_i + ∫ (2 v_i C_i(k) + B_ii(k)) dV_k + ∫ C_i(k) dV_k ∫ C_i(k) dV_k ]

on the set of all functions C_i(k), B_ij(k) subject to the constraint (47) and the potentiality conditions (24), (25). This variational problem admits an exact solution. Indeed, from (47) and (24), for any k_i, φ and η_i,

    A(k) φ² + 2 C_i(k) η_i φ + B(k) (k_i η_i)² ≥ 0.    (54)

Let the vector η_i be orthogonal to k_i. Then the last term in (54) vanishes. For non-negativeness of the left-hand side it is necessary that C_i(k) η_i = 0 for any η_i orthogonal to k_i. Hence, C_i(k) must have the form

    C_i(k) = C(k) k_i.    (55)

The function C(k) must be such that for any φ and η_i,

    A(k) φ² + 2 C(k) (k_i η_i) φ + B(k) (k_i η_i)² ≥ 0.

Thus

    (C(k))² ≤ A(k) B(k).    (56)

So, the problem is reduced to minimization of the functional

    I = ā [ v_i v_i + ∫ (2 C(k) v_i k_i + B(k) |k|²) dV_k + ∫ C(k) k_i dV_k ∫ C(k) k_i dV_k ]    (57)

over all functions C(k) and B(k) obeying the constraint (56). We will minimize functional (57) in two steps, first by minimizing

    B̄_ii = ∫ B_ii(k) dV_k = ∫ B(k) |k|² dV_k    (58)
when the constants C̄_i are given,

    ∫ C(k) k_i dV_k = C̄_i,    (59)

and then minimizing (57) over the constants C̄_i. Introducing Lagrange multipliers λ_i for the constraints (59), we have

    min_{B(k), C(k)∈(56),(59)} B̄_ii = min_{B(k), C(k)∈(56)} max_{λ_i} [ ∫ B(k)|k|² dV_k + λ_i ( ∫ C(k) k_i dV_k − C̄_i ) ].

The notation B(k), C(k) ∈ (56) means that the functions B(k), C(k) satisfy the constraint (56). Changing the order of minimization and maximization, we arrive at the minimization problem for the function of k,

    B(k)|k|² + C(k) λ_i k_i,

over functions C(k) and B(k) satisfying (56). Obviously, the minimum over C(k) is achieved when C(k) takes the value −√(A(k)B(k)) if λ_i k_i > 0 and the value √(A(k)B(k)) if λ_i k_i < 0. Then

    min_{C(k)∈(56)} [ B(k)|k|² + C(k) λ_i k_i ] = B(k)|k|² − √(A(k)B(k)) |λ_i k_i|    (60)

with the minimizer

    C(k) = −√(A(k)B(k)) Sgn[λ_i k_i].    (61)

Minimization over B(k) in (60) yields

    min_{B(k), C(k)∈(56)} [ B(k)|k|² + C(k) λ_i k_i ] = −A(k) (λ_i k_i)²/(4|k|²)    (62)

with

    B(k) = A(k) (λ_i k_i)²/(4|k|⁴).    (63)

Accordingly, from (61),

    C(k) = −A(k) |λ_i k_i| Sgn[λ_i k_i]/(2|k|²) = −A(k) λ_i k_i/(2|k|²).    (64)

So,

    min B̄_ii = max_{λ_i} [ −(1/4) A_ij λ_i λ_j + λ_i C̄_i ].    (65)

Denote by A_ij the tensor

    A_ij = ∫ A(k) (k_i k_j/|k|²) dV_k,    (66)
and by A^{(−1)}_ij the tensor inverse to A_ij. Then

    min B̄_ii = A^{(−1)}_ij C̄_i C̄_j    (67)

with the maximizer

    λ_i = 2 A^{(−1)}_ij C̄_j.    (68)

Returning to the original variational problem, we get for the functional (51) the problem

    min_{C̄_i} ā [ v_i v_i + 2 v_i C̄_i + A^{(−1)}_ij C̄_i C̄_j + C̄_i C̄_i ] = ā [ δ_ij − Â_ij ] v_i v_j,

where Â_ij is the tensor inverse to A^{(−1)}_ij + δ_ij. The minimizer is

    C̄_i = −Â_ij v_j.    (69)

Since Gaussian random fields u_i are admissible, we obtain an upper bound for the effective coefficients:

    a^e_ij v_i v_j ≤ ā [ δ_ij − Â_ij ] v_i v_j.    (70)

The correlation tensor is determined by (24), (63), (68) and (69):

    B_ij(k) = A(k) (Â_mn A^{(−1)}_nr k_m v_r)² k_i k_j / |k|⁴.    (71)

The correlation tensor in physical space, B_ij(ξ), is found from (71) by the Fourier transform. In particular, the variance of the electric field is

    B̄_ij = ∫ A(k) (Â_mn A^{(−1)}_nr k_m v_r)² (k_i k_j / |k|⁴) dV_k.    (72)

The lower bound is obtained similarly from the dual variational principle. Its derivation is given in Appendix B; here we present the result:

    a^e_ij v_i v_j ≥ ā e^{−A} Ã_ij v_i v_j,    (73)

where the tensor Ã_ij is inverse to δ_ij − Ǎ_ij, Ǎ_ij is inverse to δ_ij + A*_ij, A*_ij is inverse to A′_ij, and A′_ij = A δ_ij − A_ij, A_ij being the tensor (66). The corresponding correlation tensor of the current is given for the general case in Appendix B, and for the isotropic case by (75).
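The pointwise minimization behind (60)-(63) can be verified numerically for fixed sample values of A(k), |k| and s = λ_i k_i: with C already set to its optimal value −√(AB) Sgn[s], the function B|k|² − √(AB)|s| attains its minimum −A s²/(4|k|²) at B = A s²/(4|k|⁴). (The numbers below are arbitrary test values.)

```python
import numpy as np

A, k, s = 1.7, 2.0, 3.0            # sample values of A(k), |k|, s = lambda.k
B = np.linspace(1e-6, 5.0, 2_000_000)
values = B*k**2 - np.sqrt(A*B)*s   # C(k) already at -sqrt(A B) Sgn[lambda.k]

i = np.argmin(values)
B_star = A*s**2/(4*k**4)           # predicted minimizer (63)
v_star = -A*s**2/(4*k**2)          # predicted minimum (62)
print(B[i], B_star, values[i], v_star)
```

This is the scalar core of the two-step minimization: the same computation, repeated for every wave vector k, produces the optimal spectral densities B(k) and C(k).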
If the composite is isotropic, then A(k) is a function of |k|, and all the tensors involved can be computed in terms of the single constant A. Indeed, the tensor A_ij (66) is isotropic and therefore has the form const·δ_ij. Since the trace of this tensor is equal to ∫ A(k) dV_k = A, we have

    A_ij = (A/3) δ_ij.

Then A′_ij = (2A/3) δ_ij, A*_ij = (3/(2A)) δ_ij, Ǎ_ij = (2A/(2A+3)) δ_ij, Ã_ij = (1 + 2A/3) δ_ij, and Â_ij = [(A/3)/(1 + A/3)] δ_ij. Hence the effective coefficient lies within the bounds

    ā e^{−A} (1 + 2A/3) ≤ a^e ≤ ā/(1 + A/3).    (74)

For small A, the two leading terms of the expansions of the bounds over A coincide, and therefore (74) gives the leading terms of the expansion of the effective coefficient over A:

    a^e = ā − (1/3) ā A.

The bounds converge also for A → ∞. The bounds are shown in Fig. 1, together with the Reuss and Voigt bounds. The Hashin-Shtrikman bounds for lognormal composites degenerate (see Appendix C): the lower Hashin-Shtrikman bound coincides with the Reuss bound, while the upper bound becomes meaningless. The reason is that lognormal composites contain tiny fractions of phases with very small and very large conductivities. In such situations the Hashin-Shtrikman variational principle does not hold. For statistically isotropic composites the bounds involve only two parameters, the average value ā and the variance of the logarithm of conductivity, A. For statistically anisotropic composites the entire correlation function enters the bounds through the tensor A_ij (66). Probability densities and correlation functions of the electric field u_i and the current p_i are determined by the material correlation function A(ξ). The plots of probability densities and correlation functions of the electric field and current shown in Figs. 2-4 correspond to statistically isotropic composites with the Gaussian material correlation function

    A(ξ) = A e^{−|ξ|²/2ℓ²}.
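The isotropic reductions above rest on the directional averages ⟨k_i k_j/|k|²⟩ = (1/3)δ_ij (which gives A_ij = (A/3)δ_ij) and, used below in the variance computations, ⟨k_i k_j k_m k_n/|k|⁴⟩ = (1/15)(δ_ij δ_mn + δ_im δ_jn + δ_in δ_jm). Both identities are easy to confirm by Monte Carlo averaging over isotropic directions:

```python
import numpy as np

rng = np.random.default_rng(3)
k = rng.standard_normal((2_000_000, 3))           # isotropic k by symmetry
n = k / np.linalg.norm(k, axis=1, keepdims=True)  # unit directions k/|k|

m11   = np.mean(n[:, 0]**2)                # <k1 k1/|k|^2>       -> 1/3
m1122 = np.mean(n[:, 0]**2 * n[:, 1]**2)   # <k1 k1 k2 k2/|k|^4> -> 1/15
m1111 = np.mean(n[:, 0]**4)                # <k1^4/|k|^4>        -> 3/15
print(m11, m1122, m1111)
```

The components (1/15, 1/15, 3/15) are exactly the pattern (δ_ij δ_mn + δ_im δ_jn + δ_in δ_jm)/15 evaluated at (i,j,m,n) = (1,1,2,2) and (1,1,1,1).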
FIG. 1: Upper and lower bounds (74) for the dimensionless effective conductivity a^e/ā (solid), and the Voigt and Reuss bounds (dashed), as functions of the variance A.

FIG. 2: Probability densities of the magnitudes of the normal and longitudinal components of the electric field; the components are scaled to make the probability densities material-independent.

The correlation length ℓ is further eliminated by choosing dimensionless coordinates t_i = ξ_i/ℓ. Since

    A(k) = A e^{−|k|²/2}/(2π)^{3/2},
the correlation tensors of the electric field and current become

    B_ij(k) = [A e^{−|k|²/2}/(2π)^{3/2}] k_i k_j k_m k_n v_m v_n / [(1 + A/3)² |k|⁴],    (75)

    B′_ij(k) = [A e^{−|k|²/2}/(2π)^{3/2}] (δ_im − k_i k_m/|k|²)(δ_jn − k_j k_n/|k|²) p_m p_n / (1 + A/3)².

Due to rotational invariance,

    (1/(2π)^{3/2}) ∫ e^{−|k|²/2} (k_i k_j k_m k_n/|k|⁴) dV_k = (1/15)(δ_ij δ_mn + δ_im δ_jn + δ_in δ_jm),

    (1/(2π)^{3/2}) ∫ e^{−|k|²/2} (δ_im − k_i k_m/|k|²)(δ_jn − k_j k_n/|k|²) dV_k = (1/15)(δ_ij δ_mn + 6 δ_im δ_jn + δ_in δ_jm),

and the variances of the electric field and current are

    B̄_ij = ∫ B_ij(k) dV_k = [A/(15 (1 + A/3)²)] (|v|² δ_ij + 2 v_i v_j),

    B̄′_ij = ∫ B′_ij(k) dV_k = [A/(15 (1 + A/3)²)] (|p|² δ_ij + 7 p_i p_j).

Note that B̄_ij and B̄′_ij are not symmetric with respect to rotations: the statistical properties of the electric field and current in the direction of the external electric field and normal to this direction are different. Let us choose the coordinate frame in such a way that v_1 = |v|, v_2 = v_3 = 0. Then the variances of the electric field are

    B̄_11 = [A/(5 (1 + A/3)²)] |v|²,    B̄_22 = B̄_33 = [A/(15 (1 + A/3)²)] |v|².

If the electric field is divided by the factor √(A/5) |v|/(1 + A/3), the probability densities become material-independent. They are shown in Fig. 2. Denote the coordinate along the external electric field by t_1, and the radial coordinate in the (t_2, t_3) plane by r = √(t_2² + t_3²). Plots of some components of the correlation tensor of the electric field, B_ij, and of the correlation tensor of the current, B′_ij, as functions of t_1 and r are shown in Figs. 3 and 4. Fig. 3a depicts the functions B_11(t_1, 0)/B_11(0, 0) and B_11(0, r)/B_11(0, 0); Fig. 3b the traces B_ii(t_1, 0)/B_11(0, 0) and B_ii(0, r)/B_11(0, 0); Fig. 4a the functions B′_11(t_1, 0)/B′_11(0, 0) and B′_11(0, r)/B′_11(0, 0); Fig. 4b the traces B′_ii(t_1, 0)/B′_11(0, 0) and B′_ii(0, r)/B′_11(0, 0).

FIG. 3: a: Correlation of the electric field referred to the variance, B_11/B̄_11, as a function of t_1 for r = 0 (bottom curve) and as a function of r for t_1 = 0 (upper curve). b: Trace of the correlation tensor, B_ii/B̄_11, as a function of t_1 for r = 0 (bottom curve) and as a function of r for t_1 = 0 (upper curve).

FIG. 4: a: Correlation of the current referred to the variance, B′_11/B̄′_11, as a function of t_1 for r = 0 (bottom curve) and as a function of r for t_1 = 0 (upper curve). b: Trace of the current correlation tensor referred to the variance, B′_ii/B̄′_11, as a function of t_1 for r = 0 (bottom curve) and as a function of r for t_1 = 0 (upper curve).

IV. TRUNCATION

Dropping an infinite chain of constraints has one important consequence. Due to Kolmogorov's theorem, the infinite chain of constraints on probability densities warrants the
existence of a random field which possesses such probability densities. The infinite chain implicitly imposes some constraints on probability densities; in particular, not every function $f(a, u; \xi; a', u')$ can be the two-point probability density of a random field $\{a, u\}$. It turns out (see Appendix D) that $f(a, u; \xi; a', u')$ must obey a positive definiteness condition. To formulate this condition, let us introduce the function
\[
g(a, u; \xi; a', u') = f(a, u; \xi; a', u') - f(a, u)\, f(a', u'). \tag{76}
\]
It is assumed that the values $(a, u)$ and $(a', u')$ of the random fields at two points become statistically independent if these points are far apart, i.e.
\[
f(a, u; \xi; a', u') \to f(a, u)\, f(a', u') \quad \text{as } |\xi| \to \infty. \tag{77}
\]
Accordingly, $g(a, u; \xi; a', u')$ decays as $|\xi| \to \infty$. We assume that the decay is fast enough, so that the Fourier transform of $g(a, u; \xi; a', u')$ over $\xi$,
\[
\bar g(a, u; k; a', u') = \int g(a, u; \xi; a', u')\, e^{-i k \cdot \xi}\, d\xi,
\]
exists in the usual sense. In terms of $g(a, u; \xi; a', u')$, the consistency condition linking the two-point and one-point probability densities becomes homogeneous:
\[
\int g(a, u; \xi; a', u')\, da'\, du' = 0. \tag{78}
\]
This relation holds also for the Fourier transform,
\[
\int \bar g(a, u; k; a', u')\, da'\, du' = 0. \tag{79}
\]
Positive definiteness condition. If $f(a, u; \xi; a', u')$ is the two-point probability density of some random field $\{a, u\}$, then for any function $\varphi(a, u)$ with zero average,
\[
\int \varphi(a, u)\, da\, du = 0, \tag{80}
\]
the following inequality holds:
\[
\int \bar g(a, u; k; a', u')\, \varphi(a, u)\, \varphi(a', u')\, da\, du\, da'\, du' \geq 0. \tag{81}
\]
If $\varphi$ does not satisfy (80), then (81) is still valid due to the "degeneracy" (79) of $\bar g(a, u; k; a', u')$. The condition (80) is imposed only to deal with the "essential degrees of freedom" of
$\bar g(a, u; k; a', u')$. Instead of (80), one can set any constraint eliminating the shift of $\varphi(a, u)$ by a constant, for example,
\[
\int \varphi(a, u)\, w(a, u)\, da\, du = 0, \tag{82}
\]
where $w(a, u)$ is some weight function.

The positive definiteness condition (81) makes the truncated variational problem quite peculiar: the variational problem now includes not only the "usual" constraints on the required function, but also constraints on its Fourier transform.

Note one consequence of (81). Due to vanishing of the average field, the correlation tensor $B_{ij}(\xi)$ can be written in terms of $g(a, u; \xi; a', u')$:
\[
B_{ij}(\xi) = \int g(a, u; \xi; a', u')\, u_i u'_j\, da\, du\, da'\, du',
\]
or, in terms of Fourier transforms,
\[
\bar B_{ij}(k) = \int \bar g(a, u; k; a', u')\, u_i u'_j\, da\, du\, da'\, du'.
\]
As follows from (81) for $\varphi(a, u) = u_i \lambda_i$, where $\lambda_i$ are arbitrary constants, $\bar B_{ij}(k)$ must be a non-negative tensor,
\[
\bar B_{ij}(k)\, \lambda_i \lambda_j \geq 0. \tag{83}
\]
This is a well-known condition for the correlation tensor. It is known (see, e.g., Monin and Yaglom, 1966) that condition (83) is not only necessary, but also sufficient for $B_{ij}(\xi)$ to be the correlation tensor of some random field $u_i$. As to (81), it does not seem that the sufficiency of (81) has been established. In fact, the author has not found in the literature even a discussion of a simpler issue, the necessity of (81). If (81) turns out to be also a sufficient condition for the existence of a random field $u_i$, then the trial fields of the truncated variational principle also provide bounds; otherwise they are just approximate solutions.

The potentiality constraint can be conveniently written in terms of Fourier transforms:
\[
\int \bar g(a, u; k; a', u')\, u_i u'_j\, da\, du\, da'\, du' = \bar B(k)\, k_i k_j. \tag{84}
\]
Apparently, for potential fields, (83) reduces to non-negativeness of the Fourier transform of the scalar correlation function, $\bar B(k) \geq 0$.
Non-negativity of the Fourier transform is the common feature of all correlation functions of scalar random fields: a function $B(\xi)$ is the correlation function of a stationary scalar random field if and only if it has a non-negative Fourier transform.
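This criterion is easy to probe numerically. The sketch below (Python/NumPy; the 1-D grid and the two trial functions are arbitrary illustrative choices) computes discrete Fourier transforms of two candidate correlation functions: $e^{-|\xi|}$ passes the non-negativity test, while a boxcar fails and hence cannot be a correlation function of any stationary scalar field.

```python
import numpy as np

# Check the criterion: B(xi) is a valid correlation function of a stationary
# scalar random field iff its Fourier transform is non-negative.
xi = np.linspace(-40.0, 40.0, 4001)
dxi = xi[1] - xi[0]

def fourier(B):
    # Real, even B -> real spectrum; ifftshift moves xi = 0 to index 0
    # so the DFT of the circularly even sequence is real.
    return np.real(np.fft.fft(np.fft.ifftshift(B))) * dxi

spec_exp = fourier(np.exp(-np.abs(xi)))          # ~ 2/(1 + w^2), positive
spec_box = fourier((np.abs(xi) <= 1.0) * 1.0)    # ~ 2 sin(w)/w, changes sign

print(spec_exp.min() >= -1e-9, spec_box.min() < -1e-3)
```

The exponential thus qualifies as a correlation function, and the boxcar does not, exactly as the stated criterion predicts.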
V. THE VARIATIONAL PRINCIPLE FOR PROBABILISTIC MEASURE IN NONLINEAR PROBLEMS

An attractive feature of the approach described is that nonlinearity of the problem manifests itself in a trivial way: the only difference from linear problems is the change of the "weight" in the functional to be minimized, while the functional remains linear with respect to the probabilistic measure. If $L(a, u_i)$ and $\bar L(v_i)$ are the micro-Lagrangian and the macro-Lagrangian, respectively, then the functional $I(f)$ is replaced by the functional
\[
I(f) = \int L(a, u)\, f(a, u)\, da\, du \tag{85}
\]
and
\[
\bar L(v_i) = \min_{f(a, u)} I(f).
\]
In the truncated problem the one-point probability density is subject to the constraints
\[
\int f(a, u; \xi; a', u')\, da'\, du' = f(a, u), \tag{86}
\]
\[
f(a, u; \xi; a', u') \geq 0, \tag{87}
\]
\[
\int f(a, u; \xi; a', u')\, du\, du' = f(a; \xi; a'), \tag{88}
\]
\[
\int u_i f(a, u)\, da\, du = 0, \tag{89}
\]
the potentiality constraint and the positive-definiteness condition (81). The only point where "physics" enters the problem is the choice of the weight $L(a, u)$ in the linear functional $I(f)$. Therefore, the analysis of the constraints given for linear problems is universal and can be applied to nonlinear problems as well.

VI. TWO-PHASE COMPOSITES

The rest of the paper is concerned with the construction of approximate solutions for two-phase isotropic composite materials. Even this simplest case is rich with open issues. First we discuss the general features of the probability densities of material characteristics.
In two-phase composites the material characteristic $a(x)$ takes only two values, $a_1$ and $a_2$, in the first and second phase, respectively. The one-point probability density is
\[
f(a) = c_1\, \delta(a - a_1) + c_2\, \delta(a - a_2), \tag{90}
\]
$c_1$ and $c_2$ being the volume concentrations of the phases. The two-point probability density of $a(x)$ has the form
\[
f(a; \xi; a') = f_{11}(\xi)\, \delta(a - a_1)\, \delta(a' - a_1) + f_{12}(\xi)\, \delta(a - a_1)\, \delta(a' - a_2)
\]
\[
+\, f_{21}(\xi)\, \delta(a - a_2)\, \delta(a' - a_1) + f_{22}(\xi)\, \delta(a - a_2)\, \delta(a' - a_2). \tag{91}
\]
This can be derived from (90) and the compatibility condition
\[
\int f(a; \xi; a')\, da' = f(a). \tag{92}
\]
The compatibility condition (92) yields also a link between the functions of $\xi$, $f_{11}(\xi)$, $f_{12}(\xi)$, $f_{21}(\xi)$, $f_{22}(\xi)$:
\[
f_{11}(\xi) + f_{12}(\xi) = c_1, \qquad f_{21}(\xi) + f_{22}(\xi) = c_2. \tag{93}
\]
Besides, the symmetry of $f(a; \xi; a')$ with respect to transposition of its arguments reads
\[
f_{12}(\xi) = f_{21}(-\xi). \tag{94}
\]
Note that the characteristic function of phase one, $\chi(x)$, i.e. the function equal to unity in phase one and zero in phase two, has the same $f_{11}$, $f_{12}$ and $f_{22}$ as $a(x)$: its two-point probability density $f_\chi(a; \xi; a')$ is
\[
f_\chi(a; \xi; a') = f_{11}(\xi)\, \delta(a - 1)\, \delta(a' - 1) + f_{12}(\xi)\, \delta(a - 1)\, \delta(a') + f_{21}(\xi)\, \delta(a)\, \delta(a' - 1) + f_{22}(\xi)\, \delta(a)\, \delta(a').
\]
Equations (93), (94) show that there is only one independent characteristic of two-point statistics. As such one can choose the function $f_{12}(\xi)$. Then
\[
f_{11}(\xi) = c_1 - f_{12}(\xi), \qquad f_{22}(\xi) = c_2 - f_{12}(-\xi). \tag{95}
\]
Indeed, $f(a) = 0$ for $a \neq a_1, a_2$. The function $f(a; \xi; a')$ is non-negative, and the integral of a non-negative function is zero only if the integrand is zero. Hence, $f(a; \xi; a') = 0$ if $a \neq a_1, a_2$. From the symmetry over $a, a'$, $f(a; \xi; a') = 0$ if $a' \neq a_1, a_2$. Since the integrals of $f(a; \xi; a')$ over $a, a'$ are finite, $f(a; \xi; a')$ is a combination of $\delta$-functions.
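The identities (93) and (94) can be checked directly on any synthetic microstructure. In the sketch below (Python/NumPy; the smoothing length, threshold and separation $\xi$ are arbitrary illustrative choices), the $f_{\alpha\beta}(\xi)$ are estimated as empirical probabilities on a periodic 1-D two-phase medium, and the identities hold to machine precision.

```python
import numpy as np

# Empirical check of (93)-(94): chi(x) is the indicator of phase 1 and
# f_ab(xi) = P(phase a at x, phase b at x + xi), estimated periodically.
rng = np.random.default_rng(1)
# Smooth white noise and threshold it -> a stationary two-phase medium.
z = np.convolve(rng.normal(size=20_000), np.ones(25) / 25, mode='same')
chi = (z > 0).astype(float)
c1 = chi.mean()
c2 = 1.0 - c1

def f_ab(a, b, shift):
    xa = chi if a == 1 else 1.0 - chi
    xb = chi if b == 1 else 1.0 - chi
    return np.mean(xa * np.roll(xb, -shift))   # mean of xa(x) * xb(x + shift)

xi = 7                                          # an arbitrary separation
f11, f12 = f_ab(1, 1, xi), f_ab(1, 2, xi)
f21, f22 = f_ab(2, 1, xi), f_ab(2, 2, xi)

print(np.isclose(f11 + f12, c1),                # (93), first relation
      np.isclose(f21 + f22, c2),                # (93), second relation
      np.isclose(f12, f_ab(2, 1, -xi)))         # (94): f12(xi) = f21(-xi)
```

For the circular (periodic) estimates the relations are exact algebraic identities, not just asymptotic ones.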
Further we focus on structures which are statistically invariant with respect to mirror reflection. Accordingly,
\[
f_{12}(-\xi) = f_{12}(\xi).
\]
We assume that for large $|\xi|$ the values of the function $a(x)$ become statistically independent. This means that
\[
f(a; \xi; a') \to f(a)\, f(a') \quad \text{as } |\xi| \to \infty. \tag{96}
\]
The condition (96) yields
\[
f_{11}(\xi) \to c_1^2, \qquad f_{12}(\xi) \to c_1 c_2, \qquad f_{22}(\xi) \to c_2^2 \quad \text{as } |\xi| \to \infty. \tag{97}
\]
As the major statistical characteristic of two-phase composites it is convenient to use, instead of the function $f_{12}(\xi)$, the function $h_0(\xi)$ defined by the relation
\[
f_{12}(\xi) = c_1 c_2 \left(1 - h_0(\xi)\right), \qquad h_0(\xi) \to 0 \ \text{as } |\xi| \to \infty. \tag{98}
\]
In terms of $h_0(\xi)$, $f_{11}$ and $f_{22}$ are written as
\[
f_{11}(\xi) = c_1^2 + c_1 c_2\, h_0(\xi), \qquad f_{22}(\xi) = c_2^2 + c_1 c_2\, h_0(\xi). \tag{99}
\]
The function $h_0(\xi)$, up to the factor $c_1 c_2 (a_1 - a_2)^2$, is the correlation function of the random field $a(x)$. Indeed,
\[
M\bigl(a(x) - Ma\bigr)\bigl(a(x + \xi) - Ma\bigr) = M a(x)\, a(x + \xi) - (Ma)^2 = \int a\, a'\, f(a; \xi; a')\, da\, da' - (c_1 a_1 + c_2 a_2)^2
\]
\[
= f_{11}(\xi)\, a_1^2 + 2 f_{12}(\xi)\, a_1 a_2 + f_{22}(\xi)\, a_2^2 - (c_1 a_1 + c_2 a_2)^2 = c_1 c_2 (a_1 - a_2)^2\, h_0(\xi). \tag{100}
\]
Due to (98)-(100), there is a one-to-one correspondence between the two-point probability densities of material characteristics and the correlation function. This is a peculiarity of two-phase materials. As has been mentioned in Section 5, a function can be the correlation function of a random scalar field if and only if its Fourier transform is non-negative. Therefore,
\[
\bar h_0(k) \geq 0. \tag{101}
\]
Besides, due to the assumed stochastic mirror symmetry, $h_0(\xi) = h_0(-\xi)$, and $\bar h_0(k)$ is an even function of $k$. Inequality (101) is also a consequence of the positive definiteness condition
(81) written for the material characteristics of a two-phase composite. Indeed, let us introduce the function $g(a; \xi; a')$ similarly to (76):
\[
g(a; \xi; a') = f(a; \xi; a') - f(a)\, f(a').
\]
Then, from (90), (91), (98) and (99),
\[
g(a; \xi; a') = c_1 c_2\, h_0(\xi)\bigl(\delta(a - a_1) - \delta(a - a_2)\bigr)\bigl(\delta(a' - a_1) - \delta(a' - a_2)\bigr).
\]
Setting in the positive definiteness condition $\varphi(a, u)$ to be a function of $a$ only, $\varphi(a)$, and using that
\[
\int g(a, u; \xi; a', u')\, du\, du' = g(a; \xi; a'),
\]
we get from (81)
\[
\int \bar g(a; k; a')\, \varphi(a)\, \varphi(a')\, da\, da' \geq 0.
\]
Plugging here the expression for $\bar g(a; k; a')$, we obtain
\[
c_1 c_2\, \bar h_0(k) \left(\int \bigl(\delta(a - a_1) - \delta(a - a_2)\bigr)\, \varphi(a)\, da\right)^2 \geq 0,
\]
which is equivalent to (101).

Positiveness of the two-point probabilities imposes an upper bound on $h_0(\xi)$:
\[
h_0(\xi) \leq 1. \tag{102}
\]
Not every function $h_0(\xi)$ obeying (101) and (102) can be a correlation function of a two-phase material. A summary of the necessary conditions on $h_0(\xi)$ found by now is given by S. Torquato (2006). The above-mentioned equivalence of the positive-definiteness condition for two-phase composites to inequality (101) indicates that the positive-definiteness condition is not a sufficient condition for the existence of a two-phase microstructure with a prescribed two-point probability density of material characteristics.

We assume that $f_{12}$ vanishes as $\xi \to 0$. This means that the measure of the points of the second phase that are in contact with the first phase is zero. Such are microgeometries with phases separated by surfaces, like particulate composites or polycrystals. Then from (98)
\[
h_0(\xi) \to 1 \quad \text{as } \xi \to 0. \tag{103}
\]
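Relation (100), together with the definition (98), can also be verified on a synthetic medium. In the sketch below (Python/NumPy; the values $a_1$, $a_2$ and the microstructure parameters are illustrative), the empirical covariance of $a(x)$ matches $c_1 c_2 (a_1 - a_2)^2\, h_0(\xi)$ with $h_0$ computed from $f_{12}$.

```python
import numpy as np

# Check of relation (100): the covariance of a(x) = a1*chi + a2*(1 - chi)
# equals c1*c2*(a1 - a2)^2 * h0(xi), with h0 defined through f12 by (98).
rng = np.random.default_rng(2)
z = np.convolve(rng.normal(size=20_000), np.ones(25) / 25, mode='same')
chi = (z > 0).astype(float)                          # indicator of phase 1
a1, a2 = 3.0, 1.0                                    # illustrative values
a = a1 * chi + a2 * (1.0 - chi)
c1, c2 = chi.mean(), 1.0 - chi.mean()

xi = 11
f12 = np.mean(chi * (1.0 - np.roll(chi, -xi)))       # P(phase 1 at x, phase 2 at x+xi)
h0 = 1.0 - f12 / (c1 * c2)                           # definition (98)

cov = np.mean(a * np.roll(a, -xi)) - a.mean() ** 2   # empirical covariance
print(np.isclose(cov, c1 * c2 * (a1 - a2) ** 2 * h0))
```

For periodic estimates this is again an exact identity, since the circular autocorrelation of $\chi$ is an even function of the shift.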
We accept additionally that $h_0(\xi)$ is continuous at $\xi = 0$ and equal to unity at $\xi = 0$. In terms of the Fourier transform $\bar h_0(k)$ this means that
\[
\frac{1}{(2\pi)^3} \int \bar h_0(k)\, dV_k = 1. \tag{104}
\]
One consequence of (103) is worth noting: from (91), (98) and (99) it follows that for $\xi \to 0$
\[
f(a; \xi; a') \to c_1\, \delta(a - a_1)\, \delta(a' - a_1) + c_2\, \delta(a - a_2)\, \delta(a' - a_2)
\]
\[
= c_1\, \delta(a - a_1)\, \delta(a' - a) + c_2\, \delta(a - a_2)\, \delta(a' - a) = f(a)\, \delta(a' - a). \tag{105}
\]
This means that $a(x)$ and $a(x + \xi)$ coincide as $\xi \to 0$ with overwhelming probability.

If the phases are separated by interface surfaces, then the derivatives of $h_0(\xi)$ are not smooth at $\xi = 0$. Indeed, consider the random field $\partial a(x)/\partial x_i$. The derivatives are $\delta$-functions concentrated on the interface surfaces. Hence, since for ergodic fields the mathematical expectation coincides with the space average,
\[
M\, \frac{\partial a(x)}{\partial x_i}\, \frac{\partial a(x + \xi)}{\partial x_i} \to \infty \quad \text{as } |\xi| \to 0.
\]
On the other hand,
\[
M\, \frac{\partial a(x)}{\partial x_i}\, \frac{\partial a(x')}{\partial x'_j} = \frac{\partial^2}{\partial x_i\, \partial x'_j}\, M\, a(x)\, a(x').
\]
Thus, using (100) and setting $\xi_i = x'_i - x_i$, we get
\[
M\, \frac{\partial a(x)}{\partial x_i}\, \frac{\partial a(x')}{\partial x'_j} = -\, c_1 c_2 (a_1 - a_2)^2\, \frac{\partial^2 h_0(\xi)}{\partial \xi_i\, \partial \xi_j}, \qquad \frac{\partial^2 h_0(\xi)}{\partial \xi_i\, \partial \xi_i} \to -\infty \ \text{as } |\xi| \to 0.
\]
The type of singularity at $\xi = 0$ was established for isotropic materials by Debye, Anderson and Brumberger (1957):
\[
h_0(\xi) \approx 1 - \frac{\sigma\, |\xi|}{4\, c_1 c_2} \quad \text{as } |\xi| = \sqrt{\xi_i \xi_i} \to 0, \tag{106}
\]
with $\sigma$ having a simple geometrical meaning: $\sigma$ is the interface area per unit volume. The type of singularity at $\xi = 0$ determines the rate of decay of $\bar h_0(k)$ as $k \to \infty$: $\bar h_0(k) \propto |k|^{-3}$ in 2D, $\bar h_0(k) \propto |k|^{-4}$ in 3D.
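The exponential (Debye) correlation function $h_0(\xi) = e^{-|\xi|/\ell}$ is the standard model consistent with (103) and with a linear slope at the origin; its 3-D Fourier transform is $8\pi \ell^3/(1 + (k\ell)^2)^2$, which exhibits the $|k|^{-4}$ decay quoted above. The sketch below (Python/NumPy; the correlation length $\ell$ is an arbitrary choice) checks the radial Fourier integral against this closed form.

```python
import numpy as np

# 3-D Fourier transform of the Debye correlation h0(xi) = exp(-|xi|/l):
#   h0bar(k) = 8*pi*l^3 / (1 + (k*l)^2)^2  ~  |k|^-4  as  k -> infinity.
l = 0.7                                   # correlation length (illustrative)
r = np.linspace(0.0, 60.0, 600_001)       # radial grid; exp(-60/0.7) ~ 0
dr = r[1] - r[0]

def h0bar(k):
    # Transform of a radial function: (4*pi/k) * int_0^inf r sin(kr) e^{-r/l} dr
    return 4.0 * np.pi / k * np.sum(r * np.sin(k * r) * np.exp(-r / l)) * dr

for k in (0.5, 2.0, 8.0):
    exact = 8.0 * np.pi * l ** 3 / (1.0 + (k * l) ** 2) ** 2
    print(np.isclose(h0bar(k), exact, rtol=1e-5))
```

The quadrature agrees with the closed form at all sampled wavenumbers, confirming the $|k|^{-4}$ tail for this model microstructure.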
Several results on the behavior of $h_0(\xi)$ at $\xi = 0$ are contained in the papers by Kirste and Porod (1962), Frisch and Stillinger (1963) and Berryman (1987). Interestingly, the next terms of the expansion of $h_0(\xi)$ at $\xi = 0$ also depend on geometrical characteristics of the microstructure, like the curvatures of interphase surfaces and the average number of particle contacts in particulate composites.

In summary, in the truncated variational problem the properties of a two-phase isotropic material are characterized by the constants $a_1$, $c_1$, $a_2$, $c_2$ and the function $h_0(\xi)$. The function $h_0(\xi)$ coincides with what is called in the physical literature Debye's X-ray correlation function.

Let us turn now to the description of the random field $u_i(x)$. The joint one-point probability density of the two fields, $a$ and $u$, has the form
\[
f(a, u) = f_1(u)\, \delta(a - a_1) + f_2(u)\, \delta(a - a_2), \tag{107}
\]
where $f_1(u)$ and $f_2(u)$ are the probability densities of $u$ in the first and the second phase, respectively. Obviously, they must be such that
\[
f_1(u) \geq 0, \quad \int f_1(u)\, du = c_1; \qquad f_2(u) \geq 0, \quad \int f_2(u)\, du = c_2. \tag{108}
\]
The joint two-point probability density of the two fields, $a$ and $u$, similarly to (91), is
\[
f(a, u; \xi; a', u') = f_{11}(u; \xi; u')\, \delta(a - a_1)\, \delta(a' - a_1) + f_{12}(u; \xi; u')\, \delta(a - a_1)\, \delta(a' - a_2)
\]
\[
+\, f_{21}(u; \xi; u')\, \delta(a - a_2)\, \delta(a' - a_1) + f_{22}(u; \xi; u')\, \delta(a - a_2)\, \delta(a' - a_2). \tag{109}
\]
Here, due to the symmetry with respect to transposition of $\{a, u\}$ and $\{a', u'\}$ and the assumed stochastic mirror symmetry,
\[
f_{12}(u; \xi; u') = f_{21}(u'; \xi; u).
\]
The compatibility condition (88) yields the constraints for $f_{11}$, $f_{12}$ and $f_{22}$:
\[
\int \bigl(f_{11}(u; \xi; u') + f_{12}(u; \xi; u')\bigr)\, du' = f_1(u), \qquad \int \bigl(f_{21}(u; \xi; u') + f_{22}(u; \xi; u')\bigr)\, du' = f_2(u). \tag{110}
\]
Besides, from the compatibility condition
\[
\int f(a, u; \xi; a', u')\, du\, du' = f(a; \xi; a'),
\]
it follows that
\[
\int f_{11}\, du\, du' = c_1^2 + c_1 c_2\, h_0(\xi), \qquad \int f_{12}\, du\, du' = c_1 c_2 \bigl(1 - h_0(\xi)\bigr), \qquad \int f_{22}\, du\, du' = c_2^2 + c_1 c_2\, h_0(\xi). \tag{111}
\]
These constraints take a simpler form if we make a change of the required functions, $f_{11} \to g_{11}$, $f_{12} \to g_{12}$, $f_{22} \to g_{22}$:
\[
g_{11}(u; \xi; u') = f_{11}(u; \xi; u') - f_1(u)\, f_1(u'),
\]
\[
g_{12}(u; \xi; u') = f_{12}(u; \xi; u') - f_1(u)\, f_2(u') = g_{21}(u'; \xi; u),
\]
\[
g_{22}(u; \xi; u') = f_{22}(u; \xi; u') - f_2(u)\, f_2(u').
\]
Then the constraints (110) become homogeneous,
\[
\int \bigl(g_{11} + g_{12}\bigr)\, du' = 0, \qquad \int \bigl(g_{21} + g_{22}\bigr)\, du' = 0, \tag{112}
\]
while (111) simplifies to
\[
\int g_{11}\, du\, du' = c_1 c_2\, h_0, \qquad \int g_{12}\, du\, du' = -\, c_1 c_2\, h_0, \qquad \int g_{22}\, du\, du' = c_1 c_2\, h_0. \tag{113}
\]
Positiveness of the probabilities $f_{11}$, $f_{12}$, $f_{22}$ imposes lower bounds on $g_{11}$, $g_{12}$, $g_{22}$:
\[
g_{11}(u; \xi; u') + f_1(u)\, f_1(u') \geq 0,
\]
\[
g_{12}(u; \xi; u') + f_1(u)\, f_2(u') \geq 0, \tag{114}
\]
\[
g_{22}(u; \xi; u') + f_2(u)\, f_2(u') \geq 0.
\]
There is also the positive definiteness condition (81), which reads for two-phase composites
\[
\int \bigl[\bar g_{11}(u; k; u')\, \varphi(u)\, \varphi(u') + 2\, \bar g_{12}(u; k; u')\, \varphi(u)\, \psi(u') + \bar g_{22}(u; k; u')\, \psi(u)\, \psi(u')\bigr]\, du\, du' \geq 0. \tag{115}
\]
Here we denoted by $\varphi(u)$ and $\psi(u)$ the functions $\varphi(a_1, u)$ and $\varphi(a_2, u)$, respectively. To specify the no-shift constraint (82) we take the weight function $w(a, u)$ as
\[
w(a, u) = \delta(a - a_1) + \delta(a - a_2).
\]
Then
\[
\int \bigl(\varphi(u) + \psi(u)\bigr)\, du = 0. \tag{116}
\]
We have also to include into the set of constraints the condition (89) of vanishing of the average field,
\[
\int u_i\, f_1(u)\, du + \int u_i\, f_2(u)\, du = 0, \tag{117}
\]
and the potentiality condition, which, due to (89), can be written in terms of $g_{11}$, $g_{12}$, $g_{22}$ as
\[
\int \bigl(g_{11}(u; \xi; u') + 2\, g_{12}(u; \xi; u') + g_{22}(u; \xi; u')\bigr)\, u_i u'_j\, du\, du' = -\, \frac{\partial^2 B(\xi)}{\partial \xi_i\, \partial \xi_j}. \tag{118}
\]
So, the random field $u_i$ is characterized in the truncated problem by the functions $f_1(u)$, $f_2(u)$ and the functions $g_{11}(u; \xi; u')$, $g_{12}(u; \xi; u')$ and $g_{22}(u; \xi; u')$, which are subject to the constraints (108), (112), (114), (115), (117) and (118).

VII. CONVOLUTIONAL APPROXIMATION

All the constraints for two-point probability densities can be split into two groups: the constraints that have a simple form in terms of Fourier transforms, and the conditions of positiveness of two-point probability densities. The idea of the approximation of this Section is to satisfy the constraints for Fourier transforms precisely, with some free parameters, and then to obtain constraints for the parameters from the positiveness conditions.

So far we had only one constraint, (115), written in terms of Fourier transforms. The other constraints (112), (113) and (118) can also be presented as constraints for Fourier transforms:
\[
\int \bigl(\bar g_{11}(u; k; u') + \bar g_{12}(u; k; u')\bigr)\, du' = 0, \qquad \int \bigl(\bar g_{21}(u; k; u') + \bar g_{22}(u; k; u')\bigr)\, du' = 0, \tag{119}
\]
\[
\int \bar g_{11}(u; k; u')\, du\, du' = c_1 c_2\, \bar h_0(k), \quad \int \bar g_{12}(u; k; u')\, du\, du' = -\, c_1 c_2\, \bar h_0(k), \quad \int \bar g_{22}(u; k; u')\, du\, du' = c_1 c_2\, \bar h_0(k), \tag{120}
\]
\[
\int \bigl(\bar g_{11}(u; k; u') + 2\, \bar g_{12}(u; k; u') + \bar g_{22}(u; k; u')\bigr)\, u_i u'_j\, du\, du' = \bar B(k)\, k_i k_j, \qquad \bar B(k) \geq 0. \tag{121}
\]
The remaining constraints for $g_{11}$, $g_{12}$, $g_{22}$ are the positiveness conditions (114) of $f_{11}$, $f_{12}$, $f_{22}$. If $g_{11}$, $g_{12}$, $g_{22}$ are chosen, then (114) can be viewed as constraints for the one-point probability densities. We put
\[
\bar g_{11}(u; k; u') = c_1 c_2\, \bar h_0(k)\, G_1(u, k)\, G_1(u', k), \qquad \bar g_{12}(u; k; u') = -\, c_1 c_2\, \bar h_0(k)\, G_1(u, k)\, G_2(u', k), \tag{122}
\]
\[
\bar g_{22}(u; k; u') = c_1 c_2\, \bar h_0(k)\, G_2(u, k)\, G_2(u', k),
\]
where $G_1(u, k)$ and $G_2(u, k)$ are some functions to be found. To satisfy (120), these functions must be normalized to unity:
\[
\int G_1(u, k)\, du = 1, \qquad \int G_2(u, k)\, du = 1. \tag{123}
\]
Then the functions (122) obey also (119). The positive definiteness condition (115) holds as well, because
\[
\int \bigl[\bar g_{11}\, \varphi(u)\, \varphi(u') + 2\, \bar g_{12}\, \varphi(u)\, \psi(u') + \bar g_{22}\, \psi(u)\, \psi(u')\bigr]\, du\, du' = c_1 c_2\, \bar h_0(k) \left(\int G_1(u, k)\, \varphi(u)\, du - \int G_2(u, k)\, \psi(u)\, du\right)^2 \geq 0.
\]
The potentiality condition (121) brings a constraint for the first moments of $G_1$, $G_2$,
\[
p_i(k) = \int G_1(u, k)\, u_i\, du \quad \text{and} \quad q_i(k) = \int G_2(u, k)\, u_i\, du. \tag{124}
\]
We have
\[
c_1 c_2\, \bar h_0(k)\, \bigl(p_i(k) - q_i(k)\bigr)\bigl(p_j(k) - q_j(k)\bigr) = \bar B(k)\, k_i k_j. \tag{125}
\]
Equation (125) means that there exists a function $\varphi(k)$ such that
\[
q_i(k) - p_i(k) = \varphi(k)\, k_i. \tag{126}
\]
The function $\varphi(k)$ is linked to the field correlation function: $c_1 c_2\, \bar h_0(k)\, \varphi^2(k) = \bar B(k)$. It is convenient to re-define the function $\varphi$, replacing $\varphi$ in (126) by $\varphi/|k|$:
\[
q_i(k) - p_i(k) = \varphi(k)\, k_i / |k|. \tag{127}
\]
Accordingly,
\[
\bar B(k) = c_1 c_2\, \bar h_0(k)\, \varphi^2(k) / |k|^2. \tag{128}
\]
Indeed, projecting (125) on vectors orthogonal to $k_i$, we find that such projections of $q_i - p_i$ are zero. Therefore, $q_i - p_i$ and $k_i$ are collinear vectors.
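The self-consistency of the ansatz (122) can be seen on a discrete $u$-grid at a single fixed $k$. In the sketch below (Python/NumPy; $c_1$, $c_2$, the value of $\bar h_0(k)$ and the profiles $G_1$, $G_2$ are illustrative choices), the homogeneous constraints (119), the normalization constraints (120) and the positive definiteness condition hold automatically for any normalized $G_1$, $G_2$.

```python
import numpy as np

# Ansatz (122) at one fixed k, discretized over u.
rng = np.random.default_rng(3)
u = np.linspace(-3.0, 3.0, 61)
du = u[1] - u[0]
c1, c2, h0bar = 0.3, 0.7, 1.4               # illustrative, h0bar(k) >= 0

def normalized(x):                           # enforce (123): int G du = 1
    return x / (np.sum(x) * du)

G1 = normalized(np.exp(-(u - 1.0) ** 2))
G2 = normalized(np.exp(-(u + 0.4) ** 2 / 2))

g11 =  c1 * c2 * h0bar * np.outer(G1, G1)    # the three functions of (122)
g12 = -c1 * c2 * h0bar * np.outer(G1, G2)
g22 =  c1 * c2 * h0bar * np.outer(G2, G2)

# (119): integrals over u' of g11 + g12 and of g21 + g22 vanish.
print(np.allclose((g11 + g12).sum(axis=1) * du, 0.0),
      np.allclose((g12.T + g22).sum(axis=1) * du, 0.0))
# (120): double integrals give +-c1*c2*h0bar.
print(np.allclose(g11.sum() * du ** 2, c1 * c2 * h0bar),
      np.allclose(g12.sum() * du ** 2, -c1 * c2 * h0bar))
# Positive definiteness: the quadratic form collapses to a perfect square.
phi, psi = rng.normal(size=61), rng.normal(size=61)
q = (phi @ g11 @ phi + 2 * phi @ g12 @ psi + psi @ g22 @ psi) * du ** 2
print(q >= 0.0)
```

The quadratic form equals $c_1 c_2 \bar h_0 \bigl(\int G_1 \varphi\, du - \int G_2 \psi\, du\bigr)^2$, which is why no extra condition on $G_1$, $G_2$ is needed here.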
The remarkable feature of the trial functions (122) is that the one-point probability densities $f_1$ and $f_2$ are uniquely determined in terms of $G_1(u, k)$ and $G_2(u, k)$:
\[
f_1(u) = \frac{c_1}{(2\pi)^3} \int G_1(u, k)\, \bar h_0(k)\, dV_k, \qquad f_2(u) = \frac{c_2}{(2\pi)^3} \int G_2(u, k)\, \bar h_0(k)\, dV_k. \tag{129}
\]
Indeed, consider the constraint $f_{12} \geq 0$. It reads
\[
f_1(u)\, f_2(u') \geq \frac{c_1 c_2}{(2\pi)^3} \int G_1(u, k)\, G_2(u', k)\, \bar h_0(k)\, e^{i k \cdot \xi}\, dV_k. \tag{130}
\]
Inequality (130) must hold for any value of $\xi$. First, let us set $\xi = 0$. We get
\[
f_1(u)\, f_2(u') \geq \frac{c_1 c_2}{(2\pi)^3} \int G_1(u, k)\, G_2(u', k)\, \bar h_0(k)\, dV_k. \tag{131}
\]
Integrating (131) over $u'$ and using (123), (108), we obtain
\[
f_1(u) \geq \frac{c_1}{(2\pi)^3} \int G_1(u, k)\, \bar h_0(k)\, dV_k. \tag{132}
\]
Assume that the inequality (132) is strict on a set of $u$ of nonzero measure. Then the integral of the left-hand side of (132) over $u$ is larger than the integral of the right-hand side. On the other hand, the integrals over $u$ of both sides of (132) are equal to $c_1$ due to (123), (104) and (108), and we arrive at a contradiction. Thus, only the equality sign is possible in (132). Similarly, integrating (131) over $u$, one gets the second equation (129).

The choice of possible functions $G_1(u, k)$ and $G_2(u, k)$ is constrained by the positiveness of the one-point and two-point probabilities $f_1$, $f_2$, $f_{11}$, $f_{12}$ and $f_{22}$. From (114), for all $u$, $u'$ and $\xi$,
\[
\int G_1(u, k)\, \bar h_0(k)\, dV_k \geq 0, \qquad \int G_2(u, k)\, \bar h_0(k)\, dV_k \geq 0, \tag{133}
\]
\[
\frac{c_1}{(2\pi)^3} \int G_1(u, k)\, \bar h_0(k)\, dV_k \int G_1(u', k)\, \bar h_0(k)\, dV_k + c_2 \int G_1(u, k)\, G_1(u', k)\, e^{i k \cdot \xi}\, \bar h_0(k)\, dV_k \geq 0, \tag{134}
\]
\[
\frac{1}{(2\pi)^3} \int G_1(u, k)\, \bar h_0(k)\, dV_k \int G_2(u', k)\, \bar h_0(k)\, dV_k \geq \int G_1(u, k)\, G_2(u', k)\, e^{i k \cdot \xi}\, \bar h_0(k)\, dV_k, \tag{135}
\]
\[
\frac{c_2}{(2\pi)^3} \int G_2(u, k)\, \bar h_0(k)\, dV_k \int G_2(u', k)\, \bar h_0(k)\, dV_k + c_1 \int G_2(u, k)\, G_2(u', k)\, e^{i k \cdot \xi}\, \bar h_0(k)\, dV_k \geq 0. \tag{136}
\]
Besides, vanishing of the average field (117) yields the equation
\[
\int u_i \bigl(c_1\, G_1(u, k) + c_2\, G_2(u, k)\bigr)\, \bar h_0(k)\, dV_k\, du = 0. \tag{137}
\]
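A discrete sketch of formula (129) (Python/NumPy; the spectrum $\bar h_0$ and the family $G_1(u, k)$ below are illustrative choices) shows how the one-point density is assembled from $G_1$ and $\bar h_0$: with $\bar h_0$ normalized as in (104), the normalization $\int f_1\, du = c_1$ of (108) and the positiveness (133) come out automatically.

```python
import numpy as np

# Discrete version of (129): f1(u) = c1 * sum_k G1(u,k) * h0bar(k) * w(k),
# where w(k) is the radial measure dV_k/(2*pi)^3 and h0bar obeys (104).
u = np.linspace(-3.0, 3.0, 121)
du = u[1] - u[0]
k = np.linspace(0.05, 10.0, 400)
dk = k[1] - k[0]

h0bar = 1.0 / (1.0 + k ** 2) ** 2                    # illustrative spectrum
w = 4.0 * np.pi * k ** 2 * dk / (2.0 * np.pi) ** 3   # radial measure
h0bar /= np.sum(h0bar * w)                           # enforce (104)

# A k-dependent family of densities G1(u, k), normalized per (123).
G1 = np.exp(-0.5 * (u[:, None] - np.tanh(k)[None, :]) ** 2)
G1 /= np.sum(G1, axis=0, keepdims=True) * du

c1 = 0.35
f1 = c1 * np.sum(G1 * (h0bar * w)[None, :], axis=1)  # formula (129)
print(np.isclose(np.sum(f1) * du, c1), f1.min() >= 0.0)
```

This mirrors the equality-forcing argument in the text: once (104) and (123) hold, the total mass of $f_1$ is pinned to $c_1$ with no freedom left.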
More informationLinear Algebra Review (Course Notes for Math 308H - Spring 2016)
Linear Algebra Review (Course Notes for Math 308H - Spring 2016) Dr. Michael S. Pilant February 12, 2016 1 Background: We begin with one of the most fundamental notions in R 2, distance. Letting (x 1,
More informationPDEs, part 1: Introduction and elliptic PDEs
PDEs, part 1: Introduction and elliptic PDEs Anna-Karin Tornberg Mathematical Models, Analysis and Simulation Fall semester, 2013 Partial di erential equations The solution depends on several variables,
More informationMapping Closure Approximation to Conditional Dissipation Rate for Turbulent Scalar Mixing
NASA/CR--1631 ICASE Report No. -48 Mapping Closure Approximation to Conditional Dissipation Rate for Turbulent Scalar Mixing Guowei He and R. Rubinstein ICASE, Hampton, Virginia ICASE NASA Langley Research
More informationMultiple integrals: Sufficient conditions for a local minimum, Jacobi and Weierstrass-type conditions
Multiple integrals: Sufficient conditions for a local minimum, Jacobi and Weierstrass-type conditions March 6, 2013 Contents 1 Wea second variation 2 1.1 Formulas for variation........................
More informationMA 8101 Stokastiske metoder i systemteori
MA 811 Stokastiske metoder i systemteori AUTUMN TRM 3 Suggested solution with some extra comments The exam had a list of useful formulae attached. This list has been added here as well. 1 Problem In this
More informationThe Matrix Representation of a Three-Dimensional Rotation Revisited
Physics 116A Winter 2010 The Matrix Representation of a Three-Dimensional Rotation Revisited In a handout entitled The Matrix Representation of a Three-Dimensional Rotation, I provided a derivation of
More informationIn this section, thermoelasticity is considered. By definition, the constitutive relations for Gradθ. This general case
Section.. Thermoelasticity In this section, thermoelasticity is considered. By definition, the constitutive relations for F, θ, Gradθ. This general case such a material depend only on the set of field
More informationSome Explicit Solutions of the Cable Equation
Some Explicit Solutions of the Cable Equation Marco Herrera-Valdéz and Sergei K. Suslov Mathematical, Computational and Modeling Sciences Center, Arizona State University, Tempe, AZ 85287 1904, U.S.A.
More informationRotational motion of a rigid body spinning around a rotational axis ˆn;
Physics 106a, Caltech 15 November, 2018 Lecture 14: Rotations The motion of solid bodies So far, we have been studying the motion of point particles, which are essentially just translational. Bodies with
More informationImplicit Function Theorem: One Equation
Natalia Lazzati Mathematics for Economics (Part I) Note 3: The Implicit Function Theorem Note 3 is based on postol (975, h 3), de la Fuente (2, h5) and Simon and Blume (994, h 5) This note discusses the
More informationz = f (x; y) f (x ; y ) f (x; y) f (x; y )
BEEM0 Optimization Techiniques for Economists Lecture Week 4 Dieter Balkenborg Departments of Economics University of Exeter Since the fabric of the universe is most perfect, and is the work of a most
More informationEuler-Lagrange's equations in several variables
Euler-Lagrange's equations in several variables So far we have studied one variable and its derivative Let us now consider L2:1 More:1 Taylor: 226-227 (This proof is slightly more general than Taylor's.)
More informationA Method of Successive Approximations in the Framework of the Geometrized Lagrange Formalism
October, 008 PROGRESS IN PHYSICS Volume 4 A Method of Successive Approximations in the Framework of the Geometrized Lagrange Formalism Grigory I. Garas ko Department of Physics, Scientific Research Institute
More informationDispersion relations, stability and linearization
Dispersion relations, stability and linearization 1 Dispersion relations Suppose that u(x, t) is a function with domain { < x 0}, and it satisfies a linear, constant coefficient partial differential
More informationNon-degeneracy of perturbed solutions of semilinear partial differential equations
Non-degeneracy of perturbed solutions of semilinear partial differential equations Robert Magnus, Olivier Moschetta Abstract The equation u + F(V (εx, u = 0 is considered in R n. For small ε > 0 it is
More informationStochastic integral. Introduction. Ito integral. References. Appendices Stochastic Calculus I. Geneviève Gauthier.
Ito 8-646-8 Calculus I Geneviève Gauthier HEC Montréal Riemann Ito The Ito The theories of stochastic and stochastic di erential equations have initially been developed by Kiyosi Ito around 194 (one of
More informationLie Groups for 2D and 3D Transformations
Lie Groups for 2D and 3D Transformations Ethan Eade Updated May 20, 2017 * 1 Introduction This document derives useful formulae for working with the Lie groups that represent transformations in 2D and
More informationMathematics that Every Physicist should Know: Scalar, Vector, and Tensor Fields in the Space of Real n- Dimensional Independent Variable with Metric
Mathematics that Every Physicist should Know: Scalar, Vector, and Tensor Fields in the Space of Real n- Dimensional Independent Variable with Metric By Y. N. Keilman AltSci@basicisp.net Every physicist
More informationNIELINIOWA OPTYKA MOLEKULARNA
NIELINIOWA OPTYKA MOLEKULARNA chapter 1 by Stanisław Kielich translated by:tadeusz Bancewicz http://zon8.physd.amu.edu.pl/~tbancewi Poznan,luty 2008 ELEMENTS OF THE VECTOR AND TENSOR ANALYSIS Reference
More informationMATHEMATICAL PROGRAMMING I
MATHEMATICAL PROGRAMMING I Books There is no single course text, but there are many useful books, some more mathematical, others written at a more applied level. A selection is as follows: Bazaraa, Jarvis
More informationPHYS 705: Classical Mechanics. Rigid Body Motion Introduction + Math Review
1 PHYS 705: Classical Mechanics Rigid Body Motion Introduction + Math Review 2 How to describe a rigid body? Rigid Body - a system of point particles fixed in space i r ij j subject to a holonomic constraint:
More informationSimple Estimators for Semiparametric Multinomial Choice Models
Simple Estimators for Semiparametric Multinomial Choice Models James L. Powell and Paul A. Ruud University of California, Berkeley March 2008 Preliminary and Incomplete Comments Welcome Abstract This paper
More informationMeasuring robustness
Measuring robustness 1 Introduction While in the classical approach to statistics one aims at estimates which have desirable properties at an exactly speci ed model, the aim of robust methods is loosely
More informationChapter 5 Linear Programming (LP)
Chapter 5 Linear Programming (LP) General constrained optimization problem: minimize f(x) subject to x R n is called the constraint set or feasible set. any point x is called a feasible point We consider
More informationGlobal Maxwellians over All Space and Their Relation to Conserved Quantites of Classical Kinetic Equations
Global Maxwellians over All Space and Their Relation to Conserved Quantites of Classical Kinetic Equations C. David Levermore Department of Mathematics and Institute for Physical Science and Technology
More informationFUNDAMENTAL AND CONCEPTUAL ASPECTS OF TURBULENT FLOWS
FUNDAMENTAL AND CONCEPTUAL ASPECTS OF TURBULENT FLOWS Arkady Tsinober Professor and Marie Curie Chair in Fundamental and Conceptual Aspects of Turbulent Flows Institute for Mathematical Sciences and Department
More informationKernel Methods. Machine Learning A W VO
Kernel Methods Machine Learning A 708.063 07W VO Outline 1. Dual representation 2. The kernel concept 3. Properties of kernels 4. Examples of kernel machines Kernel PCA Support vector regression (Relevance
More informationLinear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) 1.1 The Formal Denition of a Vector Space
Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) Contents 1 Vector Spaces 1 1.1 The Formal Denition of a Vector Space.................................. 1 1.2 Subspaces...................................................
More informationWidely applicable periodicity results for higher order di erence equations
Widely applicable periodicity results for higher order di erence equations István Gy½ori, László Horváth Department of Mathematics University of Pannonia 800 Veszprém, Egyetem u. 10., Hungary E-mail: gyori@almos.uni-pannon.hu
More informationA metric space is a set S with a given distance (or metric) function d(x, y) which satisfies the conditions
1 Distance Reading [SB], Ch. 29.4, p. 811-816 A metric space is a set S with a given distance (or metric) function d(x, y) which satisfies the conditions (a) Positive definiteness d(x, y) 0, d(x, y) =
More informationIntroduction LECTURE 1
LECTURE 1 Introduction The source of all great mathematics is the special case, the concrete example. It is frequent in mathematics that every instance of a concept of seemingly great generality is in
More information= a. a = Let us now study what is a? c ( a A a )
7636S ADVANCED QUANTUM MECHANICS Solutions 1 Spring 010 1 Warm up a Show that the eigenvalues of a Hermitian operator A are real and that the eigenkets of A corresponding to dierent eigenvalues are orthogonal
More informationLinear Algebra March 16, 2019
Linear Algebra March 16, 2019 2 Contents 0.1 Notation................................ 4 1 Systems of linear equations, and matrices 5 1.1 Systems of linear equations..................... 5 1.2 Augmented
More informationSCHOOL OF MATHEMATICS MATHEMATICS FOR PART I ENGINEERING. Self-paced Course
SCHOOL OF MATHEMATICS MATHEMATICS FOR PART I ENGINEERING Self-paced Course MODULE ALGEBRA Module Topics Simplifying expressions and algebraic functions Rearranging formulae Indices 4 Rationalising a denominator
More informationMaximum variance formulation
12.1. Principal Component Analysis 561 Figure 12.2 Principal component analysis seeks a space of lower dimensionality, known as the principal subspace and denoted by the magenta line, such that the orthogonal
More information8.1 Concentration inequality for Gaussian random matrix (cont d)
MGMT 69: Topics in High-dimensional Data Analysis Falll 26 Lecture 8: Spectral clustering and Laplacian matrices Lecturer: Jiaming Xu Scribe: Hyun-Ju Oh and Taotao He, October 4, 26 Outline Concentration
More information! 4 4! o! +! h 4 o=0! ±= ± p i And back-substituting into the linear equations gave us the ratios of the amplitudes of oscillation:.»» = A p e i! +t»»
Topic 6: Coupled Oscillators and Normal Modes Reading assignment: Hand and Finch Chapter 9 We are going to be considering the general case of a system with N degrees of freedome close to one of its stable
More informationMath 413/513 Chapter 6 (from Friedberg, Insel, & Spence)
Math 413/513 Chapter 6 (from Friedberg, Insel, & Spence) David Glickenstein December 7, 2015 1 Inner product spaces In this chapter, we will only consider the elds R and C. De nition 1 Let V be a vector
More informationA.1 Appendix on Cartesian tensors
1 Lecture Notes on Fluid Dynamics (1.63J/2.21J) by Chiang C. Mei, February 6, 2007 A.1 Appendix on Cartesian tensors [Ref 1] : H Jeffreys, Cartesian Tensors; [Ref 2] : Y. C. Fung, Foundations of Solid
More informationDS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.
DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1
More informationLecture 3 - Axioms of Consumer Preference and the Theory of Choice
Lecture 3 - Axioms of Consumer Preference and the Theory of Choice David Autor 14.03 Fall 2004 Agenda: 1. Consumer preference theory (a) Notion of utility function (b) Axioms of consumer preference (c)
More informationUniversity of Toronto
A Limit Result for the Prior Predictive by Michael Evans Department of Statistics University of Toronto and Gun Ho Jang Department of Statistics University of Toronto Technical Report No. 1004 April 15,
More informationIntroduction: structural econometrics. Jean-Marc Robin
Introduction: structural econometrics Jean-Marc Robin Abstract 1. Descriptive vs structural models 2. Correlation is not causality a. Simultaneity b. Heterogeneity c. Selectivity Descriptive models Consider
More information