
THE REDUCTION OF DESIGN PROBLEMS FOR MULTIVARIATE EXPERIMENTS TO UNIVARIATE POSSIBILITIES AND THEIR LIMITATIONS

Viktor G. Kurotschka, Rainer Schwabe
Freie Universität Berlin, Mathematisches Institut, Arnimallee 2-6, D-14195 Berlin, Germany
(Research supported by grant Ku79/2- of the Deutsche Forschungsgemeinschaft.)

ABSTRACT

Situations are exhibited in which optimum designs for univariate models remain optimum for multiresponse models.

1. Introduction

A multivariate experimental situation is defined by the fact that for each of the more or less complex experimental conditions t ∈ T the observation X(t) is multivariate, i.e. one observes, instead of real valued random variables, r-dimensional real valued random vectors

    X(t) = (X^(1)(t), ..., X^(r)(t))',  t ∈ T,    (1)

with unknown location, here for simplicity expected responses

    μ(t) := E(X(t)) = (E(X^(1)(t)), ..., E(X^(r)(t)))' ∈ IR^r,  t ∈ T,    (2)

and correlation structure cov(X(t)) = Σ ∈ IR^{r×r}_{s,+} (the set of symmetric, positive definite r×r matrices), which is generally assumed to be invariant under changes of the experimental conditions and under replications, because it represents the interior relationship of the components defining the r-dimensional vector of observations.

Despite the different non-linear approaches to experiments under even less complex experimental conditions t ∈ T, the most appropriate setting is still apparently a linear parametrization β = (β_1, ..., β_p)' of μ, i.e. μ = Σ_{i=1}^p β_i a_i = a'β, for several reasons:

1) In the case of a small experimental region T, mathematically described by a finite set (which covers the whole area of the classically so-called analysis of variance models, or models with qualitative factors), the practically most useful parametrizations are necessarily linear.

2) In the case of a large set T, which is sufficiently well described by a convex compact subset of IR^K, the transparent approach via an appropriate system of adequate regression functions a_1, ..., a_p on T, approximating the unknown response of the experiment, is still the most attractive one known, in particular in view of the easier mathematical treatment, which uses continuous mathematics instead of finite mathematics, i.e. derivatives and integrals instead of differences and sums.

3) The design problem in the non-linear approaches can almost always be adequately reduced to appropriate design problems arising from linearly parametrized responses of an experiment.

The optimality of designs for multivariate experiments is obtained, here, by a reduction to the associated univariate problems, which themselves may be of a complex structure involving qualitative and quantitative factors with different types of interactions. This means that, throughout the paper, the experimental region T is kept on the most general level including, in particular, experimental regions

    T = ( ×_{k=1}^{K_1} {1, ..., I_k} ) × T_0,    (3)

where T_0 is a convex subset of IR^{K_2}. The latter is the most general representation of the most realistic experimental situation, in which K_1 qualitative factors are operating on levels 1, ..., I_k, k = 1, ..., K_1, and K_2 quantitative factors are operating on levels in T_0. For those situations the whole optimum design theory has been developed by the authors and several other members of the research group in Berlin, and the results have been taken into account in the forthcoming Lecture Notes by the second author (Schwabe, 1995c), including a number of new and general results for the univariate case which, together with the present paper, open a wide field of new applications. Also, the main result by Krafft and Schaefer (1992) appears to be a variation of the reduction theme and can be regarded as a particular case of Theorem 5.

Note that all the theorems of the present paper may be read as statements on the optimality of an exact design within the class of all exact designs and, alternatively, as statements on the optimality of a general design within the class of all general designs. More detailed descriptions of these results are omitted because of the limited length of the paper.

2. Multivariate linear models under elementary designs

We consider a model for multivariate observations X(t) = (X^(1)(t), ..., X^(r)(t))' in which the mean response μ(t) = (μ_1(t), ..., μ_r(t))' can be linearly parametrized for each component μ_ϱ(t) = E(X^(ϱ)(t)), ϱ = 1, ..., r, in dependence on the setting t ∈ T of the influential factors. More precisely,

    X^(ϱ)(t) = a_ϱ(t)'β_ϱ + Z^(ϱ)(t),    (4)

where a_ϱ: T → IR^{p_ϱ} and β_ϱ ∈ IR^{p_ϱ} are the known regression functions resp. the unknown parameters of the ϱth component, and Z(t) = (Z^(1)(t), ..., Z^(r)(t))' is the random error associated with the observation X(t), E(Z(t)) = 0. Hence, a multivariate observation X(t) is described by

    X(t) = a(t)β + Z(t),    (5)

where β ∈ IR^p, p = Σ_{ϱ=1}^r p_ϱ, is the vector of all unknown parameters,

    β = (β_1', β_2', ..., β_r')',    (6)

and a(t) is block diagonal,

    a(t) = diag( a_1(t)', a_2(t)', ..., a_r(t)' ).    (7)

We may obtain N different observations

    X_n(t_n) = a(t_n)β + Z_n(t_n)    (8)

with components

    X^(ϱ)_n(t_n) = a_ϱ(t_n)'β_ϱ + Z^(ϱ)_n(t_n)    (9)

at N possibly different adjustments t_n of the factors of influence, n = 1, ..., N. An elementary design d of size N is given as an N-dimensional vector d = (t_1, ..., t_N) which contains the adjustments for the N different experiments.

With X_(ϱ) = (X^(ϱ)_1(t_1), ..., X^(ϱ)_N(t_N))' and Z_(ϱ) = (Z^(ϱ)_1(t_1), ..., Z^(ϱ)_N(t_N))' being the vectors of observations and errors, respectively, the observations for the ϱth component can be written in the usual matrix notation

    X_(ϱ) = A^(ϱ) β_ϱ + Z_(ϱ),    (10)

where

    A^(ϱ) = ( a_ϱ(t_1), a_ϱ(t_2), ..., a_ϱ(t_N) )'    (11)

is the design matrix of the ϱth component. In accordance with the common notations in multivariate analysis we vectorize the model and arrange its components successively. Hence we obtain

    X = (X_(1)', X_(2)', ..., X_(r)')',   Z = (Z_(1)', Z_(2)', ..., Z_(r)')',    (12)

    X = A β + Z,    (13)

where the design matrix A is block diagonal,

    A = diag( A^(1), A^(2), ..., A^(r) ).    (14)

Note that a vectorization X̃ = (X_1(t_1)', ..., X_N(t_N)')' would be more appealing for design purposes. However, dealing with the rearranged vector X̃ leads to considerably more complicated notations, in particular in Section 3.

Next we assume that the underlying error structure is homogeneous, i.e. cov(Z_n(t_n)) = Σ, n = 1, ..., N, and that the observations are uncorrelated, i.e. cov(Z_m(t_m), Z_n(t_n)) = 0 for m ≠ n. This results in a covariance structure cov(Z) = Σ ⊗ E_N for the vectorized errors Z, where E_N is the N×N identity matrix and ⊗ denotes the Kronecker product. To avoid trivial cases we assume that Σ > 0 is positive definite.
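To make the vectorized model (10)-(14) and the covariance structure Σ ⊗ E_N concrete, the following numpy sketch assembles the block-diagonal design matrix for a bivariate toy example; the regression functions, design points and Σ are made-up illustrations, not taken from the paper.

    import numpy as np
    from scipy.linalg import block_diag

    # assumed elementary design d = (t_1, ..., t_N) and a 2x2 covariance Sigma
    t = np.array([0.0, 0.5, 1.0])            # N = 3 adjustments
    Sigma = np.array([[1.0, 0.6],
                      [0.6, 2.0]])           # positive definite, r = 2

    # assumed regression functions: a_1(t) = (1, t)', a_2(t) = (1, t, t^2)'
    A1 = np.column_stack([np.ones_like(t), t])         # A^(1), cf. (11)
    A2 = np.column_stack([np.ones_like(t), t, t**2])   # A^(2)

    A = block_diag(A1, A2)                   # block-diagonal A, cf. (14)
    cov_Z = np.kron(Sigma, np.eye(len(t)))   # cov(Z) = Sigma (x) E_N
    print(A.shape, cov_Z.shape)              # (6, 5) and (6, 6)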

In the linear model (13) the general (weighted) least squares estimator

    β̂ = ( A'(Σ ⊗ E_N)^{-1} A )^{-1} A'(Σ ⊗ E_N)^{-1} X    (15)

is the best linear unbiased estimator of the unknown parameters β. We will consider only such designs for which β is linearly identifiable (estimable), i.e. for which A has full column rank. The quality of the estimator β̂ based on the design and, hence, the quality of the design itself is measured by the covariance matrix

    cov(β̂) = ( A'(Σ ⊗ E_N)^{-1} A )^{-1}.    (16)

If we denote the entries of Σ^{-1} by u^{ϱϱ'}, ϱ, ϱ' = 1, ..., r, then the covariance matrix cov(β̂) can be calculated more explicitly in terms of the design matrices associated with the components,

    cov(β̂) = ( u^{ϱϱ'} A^(ϱ)'A^(ϱ') )_{ϱ,ϱ'=1,...,r}^{-1},    (17)

i.e. as the inverse of the r×r block matrix with blocks u^{ϱϱ'} A^(ϱ)'A^(ϱ').

In general, it is not possible to minimize the covariance matrix cov(β̂) uniformly, i.e. in the positive definite sense. As a compromise we have to look at real valued functions of cov(β̂), like the determinant, the trace, or the largest eigenvalue, which are isotonic. In particular, a design will be called D- (A- resp. E-) optimum if it minimizes the determinant (the trace resp. the largest eigenvalue) of cov(β̂).

3. Homogeneous components

Additionally, in the present section, we assume that the mean responses μ_ϱ are modelled in the same way for each component, i.e. the regression functions a_ϱ of the components coincide: a_ϱ = a_1, ϱ = 1, ..., r. Note that still, in general, the unknown parameters β_ϱ and, hence, the mean responses will be different for the single components. In this situation also the componentwise design matrices A^(ϱ) = A^(1) coincide, and both the regression function a and the design matrix A factorize according to (cf. (7) and (14))

    a(t) = E_r ⊗ a_1(t)'    (18)

and

    A = E_r ⊗ A^(1).    (19)

Because of the equi-modelling of the components we can calculate the general least squares estimator (15) by using (19) as

    β̂ = ( Σ^{-1} ⊗ A^(1)'A^(1) )^{-1} ( Σ^{-1} ⊗ A^(1)' ) X = ( E_r ⊗ (A^(1)'A^(1))^{-1} A^(1)' ) X.    (20)

From (20) we see that the knowledge of Σ is not required for the calculation of β̂ and that the general least squares estimator coincides with the ordinary least squares estimator

    β̂_ols = (A'A)^{-1} A'X.    (21)

Moreover, the covariance matrix (16) factorizes,

    cov(β̂) = Σ ⊗ (A^(1)'A^(1))^{-1}.    (22)

According to the Kronecker product structure (22) the eigenvalues of the covariance matrix cov(β̂) factorize, i.e. if λ_Σ denotes the vector of eigenvalues of Σ, λ_1 the vector of eigenvalues of (A^(1)'A^(1))^{-1}, and λ the vector of eigenvalues of cov(β̂), then

    λ = λ_Σ ⊗ λ_1.    (23)

In particular, we obtain for the determinant

    det(cov(β̂)) = det(Σ)^{p_1} det(A^(1)'A^(1))^{-r},    (24)

for the trace

    tr(cov(β̂)) = tr(Σ) tr( (A^(1)'A^(1))^{-1} ),    (25)

and for the largest eigenvalue

    λ_max(cov(β̂)) = λ_max(Σ) λ_max( (A^(1)'A^(1))^{-1} ).    (26)

Hence the optimization with respect to the usual D- (A- resp. E-) criterion reduces to the corresponding univariate optimization problem.

Theorem 1. If the design d is D- (A- resp. E-) optimum in the univariate model with mean response μ(t) = a_1(t)'β, then d is D- (A- resp. E-) optimum in the multivariate model with homogeneous components μ_ϱ(t) = a_1(t)'β_ϱ.
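The factorizations (22)-(26) are easy to verify numerically. The sketch below, a toy example with assumed design points and an assumed Σ (not from the paper), builds the homogeneous-component model A = E_r ⊗ A^(1), computes cov(β̂) from (16), and checks the Kronecker factorization of the covariance, the determinant, the trace and the largest eigenvalue.

    import numpy as np

    t = np.array([-1.0, 0.0, 0.3, 1.0])                 # N = 4 assumed design points
    A1 = np.column_stack([np.ones_like(t), t, t**2])    # common design matrix A^(1), p_1 = 3
    Sigma = np.array([[1.0, 0.4], [0.4, 0.5]])          # r = 2, positive definite

    r, N, p1 = 2, len(t), A1.shape[1]
    A = np.kron(np.eye(r), A1)                          # (19)
    Sigma_inv = np.linalg.inv(Sigma)
    cov = np.linalg.inv(A.T @ np.kron(Sigma_inv, np.eye(N)) @ A)   # (16)

    M1_inv = np.linalg.inv(A1.T @ A1)
    assert np.allclose(cov, np.kron(Sigma, M1_inv))                            # (22)
    assert np.isclose(np.linalg.det(cov),
                      np.linalg.det(Sigma)**p1 * np.linalg.det(A1.T @ A1)**(-r))   # (24)
    assert np.isclose(np.trace(cov), np.trace(Sigma) * np.trace(M1_inv))           # (25)
    assert np.isclose(np.linalg.eigvalsh(cov).max(),
                      np.linalg.eigvalsh(Sigma).max() * np.linalg.eigvalsh(M1_inv).max())  # (26)
    print("Kronecker factorization of cov(beta_hat) verified")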

Remark 1. Because of (23) the result of Theorem 1 can be extended to the whole class of Φ_q-criteria based on the eigenvalues of the covariance matrix (for a definition see e.g. Pazman, 1986).

Remark 2. Further generalizations are straightforward if we are interested in a linear aspect ψ(β) of the unknown parameters which is of the form ψ(β) = (L ⊗ L_0)β, where ψ_0(β_0) = L_0 β_0 is the corresponding linear aspect in the univariate model, L_0 ∈ IR^{s×p_1} and L ∈ IR^{s_1×r}. Particular cases are covered by L = E_r, if we are interested in the same linear aspect for each component, and by L with zero row sums, if we are interested in differences between the components.

Finally, we want to mention that in the particular situation of the present section the model can be written in a more appealing way,

    X̄ = A^(1) B + Z̄,    (27)

where X̄ = (X_(1), ..., X_(r)), Z̄ = (Z_(1), ..., Z_(r)) and B = (β_1, ..., β_r), such that each row of (27) represents an r-dimensional observation X_n(t_n)'. However, the form (27) is not convenient for design considerations because, then, we have to deal with a matrix shape of parameters rather than with the vectorized β.

4. Uncorrelated components

If the components of the multivariate observations do not influence each other, then the covariance matrix of a single observation is diagonal,

    Σ = diag( σ_1², ..., σ_r² ).    (28)

The induced covariance structure Σ ⊗ E_N of the whole observational vector X is also diagonal,

    Σ ⊗ E_N = diag( σ_1² E_N, σ_2² E_N, ..., σ_r² E_N ).    (29)

Using the block diagonal structure (14) of the design matrix A we obtain from (17) that the covariance matrix of the general least squares estimator becomes block diagonal,

    cov(β̂) = diag( σ_1² (A^(1)'A^(1))^{-1}, σ_2² (A^(2)'A^(2))^{-1}, ..., σ_r² (A^(r)'A^(r))^{-1} ).    (30)

Moreover, the least squares estimator β̂ can be written componentwise as

    β̂ = ( ((A^(1)'A^(1))^{-1} A^(1)' X_(1))', ..., ((A^(r)'A^(r))^{-1} A^(r)' X_(r))' )'.    (31)

Again the calculation of β̂ does not require any further knowledge of the diagonal entries σ_ϱ² of Σ, and the general least squares estimator coincides with the ordinary one, β̂_ols = (A'A)^{-1} A'X. Denote by λ_(ϱ) the vector of eigenvalues of (A^(ϱ)'A^(ϱ))^{-1}. Then the vector λ of eigenvalues of cov(β̂) can be partitioned according to

    λ = ( σ_1² λ_(1)', σ_2² λ_(2)', ..., σ_r² λ_(r)' )'.    (32)

A design is D-optimum if it minimizes ∏_{ϱ=1}^r det(A^(ϱ)'A^(ϱ))^{-1}, irrespective of the actual values of σ_1², ..., σ_r². Alternatively, an A-optimum design minimizes Σ_{ϱ=1}^r σ_ϱ² tr( (A^(ϱ)'A^(ϱ))^{-1} ), and an E-optimum design aims at minimizing max_{ϱ=1,...,r} σ_ϱ² λ_max( (A^(ϱ)'A^(ϱ))^{-1} ), respectively. In particular, if a design is simultaneously optimum for all components, then it solves also the corresponding multivariate optimization problem.

Theorem 2. If the design d is D- (A- resp. E-) optimum in each univariate model with mean response μ_ϱ(t) = a_ϱ(t)'β_ϱ, then d is D- (A- resp. E-) optimum in the multivariate model with uncorrelated components.
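For uncorrelated components the criteria separate as described above. A small numpy sketch, with toy regression functions and variances assumed purely for illustration, evaluates the D-, A- and E-criteria from the componentwise design matrices as in (30)-(32).

    import numpy as np

    t = np.array([0.0, 0.25, 0.5, 0.75, 1.0])              # N = 5 assumed design points
    A_comp = [np.column_stack([np.ones_like(t), t]),        # A^(1): intercept + linear
              np.column_stack([np.ones_like(t), t, t**2])]  # A^(2): intercept + quadratic
    sigma2 = [1.0, 2.5]                                      # diagonal of Sigma, cf. (28)

    covs = [s2 * np.linalg.inv(A.T @ A) for s2, A in zip(sigma2, A_comp)]  # blocks of (30)

    # D-criterion: product of the block determinants; the sigma^2 factors do not
    # change the ranking of designs, cf. the remark after (32)
    D_crit = np.prod([np.linalg.det(np.linalg.inv(A.T @ A)) for A in A_comp])
    # A-criterion: sum of the variance-weighted traces
    A_crit = sum(np.trace(C) for C in covs)
    # E-criterion: largest eigenvalue over all blocks
    E_crit = max(np.linalg.eigvalsh(C).max() for C in covs)
    print(D_crit, A_crit, E_crit)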

5. Components with increasing complexity

In general situations the underlying models may differ for the single components, e.g. regressions of different precision may be fitted, the components may be affected by different sets of factors of influence, or different interaction structures may occur. In the setting of increasing complexity we assume that the components can be rearranged in such a way that the regression functions a_ϱ are included in each other in ascending order,

    a_1 = f_1,  a_2 = (f_1', f_2')',  ...,  a_r = (f_1', f_2', ..., f_r')',    (33)

f_ϱ: T → IR^{q_ϱ}, q_ϱ = p_ϱ − p_{ϱ−1}, where some of the f_ϱ might be empty (q_ϱ = 0).

Now, for a fixed design, we can reparametrize the models for each component by orthogonalization with respect to their predecessors. Thus we define f̃_1 = f_1 and f̃_ϱ = f_ϱ − A_{d,ϱ}'A^(ϱ−1)(A^(ϱ−1)'A^(ϱ−1))^{-1} a_{ϱ−1}, where A_{d,ϱ} = (f_ϱ(t_1), ..., f_ϱ(t_N))', and ã_ϱ = (f̃_1', ..., f̃_ϱ')'. Then the components of the corresponding univariate design matrices Ã^(ϱ), which are defined according to (11), are orthogonal, i.e.

    Ã_{d,ϱ}'Ã_{d,ϱ'} = 0,    (34)

ϱ ≠ ϱ', where, again, Ã_{d,ϱ} = (f̃_ϱ(t_1), ..., f̃_ϱ(t_N))'. Moreover, the design matrices are related by the linear transformations ã_ϱ = L_ϱ a_ϱ, Ã^(ϱ) = A^(ϱ) L_ϱ' and Ã = A L', where L_ϱ and L are lower triangular matrices with all their diagonal entries equal to one (see Appendix).

Next, we rearrange the regression functions in the multivariate model according to the components f̃_ϱ and, hence, permute the parameters. Let

    ā(t) = diag( E_r ⊗ f̃_1(t)', E_{r−1} ⊗ f̃_2(t)', ..., E_{r−ϱ+1} ⊗ f̃_ϱ(t)', ..., E_1 ⊗ f̃_r(t)' );    (35)

then ā = Q ã for some permutation matrix Q. The design matrices Ā are

defined in analogy to (14),

    Ā = diag( Ā_1, ..., Ā_r ),   Ā_ϱ = ( ā_ϱ(t_1), ..., ā_ϱ(t_N) )',    (36)

where ā_ϱ is the ϱth (block) component of ā. As det(L) = |det(Q)| = 1, we obtain

    det(cov(β̂)) = det( Ã'(Σ^{-1} ⊗ E_N)Ã )^{-1} = det( Ā'(Σ^{-1} ⊗ E_N)Ā )^{-1}    (37)

by (16). Denote by U_ϱ = (u^{ϱ'ϱ''})_{ϱ',ϱ''=ϱ,...,r} that part of the inverse covariance matrix Σ^{-1} associated with the components ϱ, ..., r in which the regression functions f_ϱ are present, U_ϱ = (0, E_{r−ϱ+1}) Σ^{-1} (0, E_{r−ϱ+1})', U_1 = Σ^{-1}. Then

    Ā'(Σ^{-1} ⊗ E_N)Ā = diag( U_1 ⊗ Ã_{d,1}'Ã_{d,1}, U_2 ⊗ Ã_{d,2}'Ã_{d,2}, ..., U_r ⊗ Ã_{d,r}'Ã_{d,r} )    (38)

is block diagonal and, hence,

    det(cov(β̂)) = ( ∏_{ϱ=1}^r det(U_ϱ)^{−q_ϱ} ) ( ∏_{ϱ=1}^r det(Ã_{d,ϱ}'Ã_{d,ϱ})^{−(r−ϱ+1)} )    (39)

in view of (37). Note that, by convention, det(Ã_{d,ϱ}'Ã_{d,ϱ}) = 1 if q_ϱ = 0. By the orthogonality (34) of the Ã_{d,ϱ} and as det(L_ϱ) = 1 we have det(A^(ϱ)'A^(ϱ)) = det(Ã^(ϱ)'Ã^(ϱ)) = ∏_{ϱ'=1}^{ϱ} det(Ã_{d,ϱ'}'Ã_{d,ϱ'}), so that, finally,

    det(cov(β̂)) = ( ∏_{ϱ=1}^r det(U_ϱ)^{−q_ϱ} ) ( ∏_{ϱ=1}^r det(A^(ϱ)'A^(ϱ))^{−1} ).    (40)

Note that formula (40) is the main result obtained by Krafft and Schaefer (1992). Now, the determinant of the covariance matrix is minimized if the determinants of the covariance matrices (A^(ϱ)'A^(ϱ))^{-1} can be minimized simultaneously for each component.
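Formula (40) can be checked numerically. The sketch below sets up a nested ("increasing complexity") bivariate example with assumed toy functions f_1 = (1, t)' and f_2 = (t²) and an assumed Σ, and confirms that det cov(β̂) computed from (16) agrees with the product form (40); here U_1 = Σ^{-1} and U_2 is its lower-right entry.

    import numpy as np
    from scipy.linalg import block_diag

    t = np.array([-1.0, -0.2, 0.4, 1.0])                   # N = 4 assumed design points
    A1 = np.column_stack([np.ones_like(t), t])             # a_1 = f_1 = (1, t)'
    A2 = np.column_stack([A1, t**2])                       # a_2 = (f_1', f_2')', f_2 = t^2
    Sigma = np.array([[1.0, 0.7], [0.7, 1.5]])

    A = block_diag(A1, A2)
    Sigma_inv = np.linalg.inv(Sigma)
    cov = np.linalg.inv(A.T @ np.kron(Sigma_inv, np.eye(len(t))) @ A)   # (16)

    q1, q2 = A1.shape[1], A2.shape[1] - A1.shape[1]        # q_1 = 2, q_2 = 1
    U1, U2 = Sigma_inv, Sigma_inv[1:, 1:]                  # U_1 = Sigma^{-1}, U_2 = (u^{22})
    rhs = (np.linalg.det(U1)**(-q1) * np.linalg.det(U2)**(-q2)
           * np.linalg.det(A1.T @ A1)**(-1) * np.linalg.det(A2.T @ A2)**(-1))
    assert np.isclose(np.linalg.det(cov), rhs)             # formula (40)
    print("det cov(beta_hat) =", np.linalg.det(cov))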

As reparametrizations do not affect the criterion of D-optimality, the concept of increasing complexity can be paraphrased as a_ϱ = C_ϱ a_{ϱ+1} for some p_ϱ × p_{ϱ+1} matrix C_ϱ, ϱ = 1, ..., r−1.

Theorem 3. If the design d is D-optimum in each univariate model with mean response μ_ϱ(t) = a_ϱ(t)'β_ϱ, a_ϱ = C_ϱ a_{ϱ+1}, then d is D-optimum in the multivariate model with increasing complexity.

For models with increasing complexity, in which a design exists which is simultaneously D-optimum for all components, particular examples are given by (i) polynomial regression of degrees zero and one, (ii) trigonometric regression of different degrees up to degree (N−1)/2, and (iii) K-factor models with different depth of interactions (see Schwabe, 1995a).

6. Components with mutually exclusive complexity

In the general setting, in which the regression functions are not necessarily included in each other, further assumptions have to be made on the optimum design. Essentially, optimality is also required for augmented models which are formed by the regression functions associated with any pair of components. This situation will be exhibited for a bivariate model first. In general, a pair of regression functions a_1 and a_2 can be reparametrized in such a way that there is a common regression vector f_0 and there are mutually different components f_1 and f_2 such that the set of entries in (f_0', f_1', f_2') is linearly independent on T,

    a_1 = (f_0', f_1')',   a_2 = (f_0', f_2')'.    (41)

As in the previous section we orthogonalize f_1 and f_2 with respect to f_0: T → IR^{p_0} and a fixed design, f̃_ϱ = f_ϱ − A_{d,ϱ}'A_{d,0}(A_{d,0}'A_{d,0})^{-1} f_0, ϱ = 1, 2, where A_{d,ϱ} = (f_ϱ(t_1), ..., f_ϱ(t_N))', ϱ = 0, 1, 2. Again, the corresponding transformation matrices are lower triangular with all diagonal elements equal to one. Next we rearrange the regression functions similarly to (35),

    ā = diag( E_2 ⊗ f_0', f̃_1', f̃_2' ).    (42)

We define Ā as in (36) and obtain det(cov(β̂)) = det( Ā'(Σ^{-1} ⊗ E_N)Ā )^{-1}

(cf. (39)) and

    Ā'(Σ^{-1} ⊗ E_N)Ā = diag( Σ^{-1} ⊗ A_{d,0}'A_{d,0} ,  ( u^{ϱϱ'} Ã_{d,ϱ}'Ã_{d,ϱ'} )_{ϱ,ϱ'=1,2} )    (43)

similar to (38). Combining these results we get

    det(cov(β̂)) ≥ det(Σ)^{p_0} (u^{11})^{−(p_1−p_0)} (u^{22})^{−(p_2−p_0)} ∏_{ϱ=1}^{2} det(A^(ϱ)'A^(ϱ))^{−1},    (44)

with equality if, and only if, either the components are uncorrelated, u^{12} = 0, or f̃_1 and f̃_2 are orthogonal with respect to the design, Ã_{d,1}'Ã_{d,2} = 0. The latter condition can equivalently be formulated as a factorization for f_1 and f_2 adjusted for f_0,

    A_{d,1}'A_{d,2} = A_{d,1}'A_{d,0}(A_{d,0}'A_{d,0})^{-1} A_{d,0}'A_{d,2}.

Theorem 4. If the design d is D-optimum in each univariate model with mean response μ_ϱ(t) = a_ϱ(t)'β_ϱ, a_ϱ = (f_0', f_ϱ')', (f_0', f_1', f_2') linearly independent, and if A_{d,1}'A_{d,2} = A_{d,1}'A_{d,0}(A_{d,0}'A_{d,0})^{-1} A_{d,0}'A_{d,2}, then d is D-optimum in the bivariate model with mutually different complexity.

If f_1 is omitted in Theorem 4, the factorization condition vanishes and we recover the bivariate version of Theorem 3. Additional models which are covered by Theorem 4 include trigonometric regression, in case not all lower order frequencies are present, and K-factor models, in which the components are influenced by different factors or in which interactions between different factors are active.

For the special case that only the constant term is in common, f_0 = 1, the factorization condition of Theorem 4 simplifies to (1/N) Σ_{n=1}^N f_1(t_n) f_2(t_n)' = ( (1/N) Σ_{n=1}^N f_1(t_n) ) ( (1/N) Σ_{n=1}^N f_2(t_n) )'. Moreover, in this case, the factorization condition implies that d is also D-optimum for the augmented univariate model with mean response μ_{1,2}(t) = β_0 + f_1(t)'β_1 + f_2(t)'β_2. Finally, if f_0 is omitted in Theorem 4, then the orthogonality of f_1 and f_2 themselves is required, A_{d,1}'A_{d,2} = 0. Also in this case the factorization condition implies the D-optimality of d for the corresponding augmented univariate model with mean response μ_{1,2}(t) = f_1(t)'β_1 + f_2(t)'β_2.

For components with simultaneously common regression part f_0 the derivation of Theorem 4 can be extended to more than two components in a straightforward manner to obtain the following result.

Corollary 1. If the design d is D-optimum in each univariate model with mean response μ_ϱ(t) = a_ϱ(t)'β_ϱ, a_ϱ = (f_0', f_ϱ')', (f_0', f_1', ..., f_r') linearly independent, and if A_{d,ϱ}'A_{d,ϱ'} = A_{d,ϱ}'A_{d,0}(A_{d,0}'A_{d,0})^{-1} A_{d,0}'A_{d,ϱ'}, ϱ ≠ ϱ', then d is D-optimum in the multivariate model with mutually different complexity.
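The factorization condition of Theorem 4 is a finite check on the design matrices. Below is a small sketch with illustrative, assumed choices f_0 = 1, f_1(t) = t, f_2(t) = cos(2πt) and an assumed four-point design; it tests whether A_{d,1}'A_{d,2} = A_{d,1}'A_{d,0}(A_{d,0}'A_{d,0})^{-1}A_{d,0}'A_{d,2}, i.e. whether f_1 and f_2, adjusted for f_0, are orthogonal under the design.

    import numpy as np

    t = np.array([1/8, 3/8, 5/8, 7/8])                 # assumed 4-point design
    f0 = np.ones_like(t)[:, None]                      # common part f_0 = 1
    f1 = t[:, None]                                    # f_1(t) = t
    f2 = np.cos(2 * np.pi * t)[:, None]                # f_2(t) = cos(2*pi*t)

    A0, A1, A2 = f0, f1, f2                            # A_{d,0}, A_{d,1}, A_{d,2}
    lhs = A1.T @ A2
    rhs = A1.T @ A0 @ np.linalg.inv(A0.T @ A0) @ A0.T @ A2
    print("factorization condition holds:", np.allclose(lhs, rhs))   # True for this design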

We note that the factorization condition can be equivalently rewritten as A^(ϱ)'A^(ϱ') = A^(ϱ)'A_{d,0}(A_{d,0}'A_{d,0})^{-1} A_{d,0}'A^(ϱ'). In the general case factorization conditions are requested for the regression functions adjusted to their respective common part f_{ϱ,ϱ'}. To simplify notations we write (a_ϱ) for the set {a_{ϱ1}, ..., a_{ϱp_ϱ}} of all entries (functions) in a_ϱ, and (f_{ϱ,ϱ'}) analogously.

Theorem 5. If the design d is D-optimum in each univariate model with mean response μ_ϱ(t) = a_ϱ(t)'β_ϱ, and if for every pair ϱ and ϱ' the sets of functions (f_{ϱ,ϱ'}) = (a_ϱ) ∩ (a_{ϱ'}), (a_ϱ) \ (f_{ϱ,ϱ'}) and (a_{ϱ'}) \ (f_{ϱ,ϱ'}) are linearly independent and the factorization A^(ϱ)'A^(ϱ') = A^(ϱ)'A_{d,ϱ,ϱ'}(A_{d,ϱ,ϱ'}'A_{d,ϱ,ϱ'})^{-1} A_{d,ϱ,ϱ'}'A^(ϱ') holds, where A_{d,ϱ,ϱ'} denotes the design matrix associated with f_{ϱ,ϱ'}, then d is D-optimum in the multivariate model with mutually different complexity.

The proof can be performed by a more sophisticated rearrangement of the regression functions. We mention that, as in the bivariate case, Theorem 3 follows from Theorem 5, because the factorization condition vanishes for components with increasing complexity.

7. Applications

Within the classical terrain of experimental design the so-called analysis of variance models, i.e. experiments influenced by a number of qualitative factors, play an important role. For such experiments the most general region of experimental conditions is given by T = ×_{k=1}^{K} {1, ..., I_k}, defining K qualitative factors, with the kth one operating at I_k possible levels 1, ..., I_k, k = 1, ..., K, and with different types and orders of interactions between subsets of the K factors.

The second classical approach in experimental design is concerned with the so-called regression models, i.e. experiments influenced by a number of quantitative factors. For such experiments the most general region of experimental conditions is given by a convex subset T = T_0 of IR^K, defining K quantitative factors operating at all possible levels t ∈ T and, again, with different types and orders of interactions between subsets of the K factors.

The specification of these two types of experiments gives rise to a whole class of possible linear parametrizations for univariate linear models in which both kinds of factors, qualitative and quantitative ones, are active and may interact with each other in various ways. In the multivariate case each component ϱ of the multivariate response may have, in general, a different type of parametrization according to the number of factors actually affecting the components and different types of

interactions within the classes of factors involved in the individual components. So the main Theorem 5 provides the adequate tool for such realistic cases to solve the design problem for multivariate experiments by solving the associated design problems in the setup of the individual components. To show only part of the huge variety of parametrizations which fall under the conditions of Theorem 5 and its specialization to hierarchical models (Theorem 3) we present the following general classes of examples:

Example 1. If the components are affected by different sets of factors, this can generally be described by responses μ_ϱ(t_1, ..., t_r) = β_{ϱ,0} + f_ϱ(t_ϱ)'β_ϱ, ϱ = 1, ..., r, where t = (t_1, ..., t_r) ∈ T = ×_{ϱ=1}^r T_ϱ, and T_ϱ itself can be a K_ϱ-dimensional set defining K_ϱ factors of influence. To be more specific, in an analysis of variance setting f_ϱ may describe any analysis of variance model in K_ϱ factors operating on T_ϱ = ×_{k=1}^{K_ϱ} {1, ..., I_{ϱ,k}}. In a regression setting f_ϱ may describe polynomial or trigonometric regression in K_ϱ factors operating on a convex subset T_ϱ of IR^{K_ϱ}.

Example 2. The situation that different interaction structures are present for the individual components leads, in the particular case of two factors of influence, to a hierarchical model. In the analysis of variance setting there are essentially two types of different responses, μ_1(i, j) = α_{1,i} + β_{1,j} without interactions and μ_2(i, j) = α_{2,i} + β_{2,j} + γ_{2,ij} with interactions, respectively. For more than two factors more complicated interaction structures may arise, e.g. in the simplest case responses μ_1(i_1, i_2, i_3) = α^(1)_{1,i_1} + α^(2)_{1,i_2} + α^(3)_{1,i_3} + α^(1,2)_{1,i_1 i_2} and μ_2(i_1, i_2, i_3) = α^(1)_{2,i_1} + α^(2)_{2,i_2} + α^(3)_{2,i_3} + α^(1,3)_{2,i_1 i_3} with interactions between the first and second resp. the first and third factor only.

Example 3. Another typical example of a hierarchical model is obtained in trigonometric regression, where the response is modelled by a Fourier expansion μ_ϱ(t) = β_{ϱ,0} + Σ_{m=1}^{M_ϱ} ( β_{ϱ,2m−1} sin(2πmt) + β_{ϱ,2m} cos(2πmt) ) up to order M_ϱ on t ∈ T = [0, 1). In more than one dimension also partial interaction structures can be considered, like complete and partial first and second order expansions.

Also the still larger class of more complex, realistic multivariate experiments, where both kinds of factors, qualitative and quantitative ones, are simultaneously affecting the response, can be treated by Theorem 5, using our most recent results for the univariate case, as exhibited in the papers by Kurotschka (1984, 1988) and Schwabe (1995a, 1995b) and the forthcoming lecture notes (Schwabe, 1995c).
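For Example 3, the regression vector of a trigonometric component of order M_ϱ is easy to generate; the small helper below (name and setup are illustrative assumptions, not from the paper) produces two components of increasing complexity.

    import numpy as np

    def fourier_regression(t, order):
        """a(t) = (1, sin(2*pi*t), cos(2*pi*t), ..., sin(2*pi*order*t), cos(2*pi*order*t))'"""
        terms = [1.0]
        for m in range(1, order + 1):
            terms += [np.sin(2 * np.pi * m * t), np.cos(2 * np.pi * m * t)]
        return np.array(terms)

    # two nested components: orders M_1 = 1 and M_2 = 2
    print(fourier_regression(0.25, 1))   # length 3
    print(fourier_regression(0.25, 2))   # length 5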

8. General designs

The information matrix I(d) of an elementary design d is defined as the inverse of the corresponding covariance matrix cov(β̂),

    I(d) = A'(Σ ⊗ E_N)^{-1} A.    (45)

By rearranging terms it can be seen that the information matrix is the sum of the information contained in each single setting,

    I(d) = Σ_{n=1}^{N} a(t_n)' Σ^{-1} a(t_n).    (46)

If there are only few different settings and, hence, many replications in an elementary design, it is more convenient to write

    d = ( t_1, t_2, ..., t_J ;  N_1, N_2, ..., N_J ),    (47)

where the settings t_j are mutually different and N_j is the number of replications at the setting t_j, Σ_{j=1}^J N_j = N. Hence we obtain for the information matrix

    I(d) = Σ_{j=1}^{J} N_j a(t_j)' Σ^{-1} a(t_j).    (48)

As the sample size N is fixed, an elementary design is determined by its relative frequencies w_j = N_j/N at the actual settings. Thus d may be identified with a discrete measure on a finite support which assigns the weights w_j to its support points t_j. To solve the integer optimization problem for the N_j is often a difficult task. Therefore it is reasonable to embed the optimization problem in a continuous setting and to drop the requirement that the w_j be integer multiples of 1/N. Hence, as a general design we define any measure δ on T, normalized to one, which is concentrated on a finite number of supporting points. In accordance with (48) we define the information matrix

    I(δ) = ∫ a(t)' Σ^{-1} a(t) δ(dt),    (49)

such that, for any elementary design d of sample size N, I(δ_d) = (1/N) I(d) is a normalized version of the information matrix I(d).

Besides this traditional extension of the design problem to general designs, in order to embed the optimization problem into a richer set which is convex, a more substantial reason for considering general designs is evident. Because the supposition of a prespecified number of units with prespecified variation is sometimes rather artificial, a more general idea should be associated with the concept of general designs (see Kurotschka, 1988, or Schwabe, 1995b). Assume that we may observe at certain design points t_1, ..., t_J with intensity proportional to w_j > 0, Σ_{j=1}^J w_j = 1, i.e. cov(X_j(t_j)) = w_j^{-1} Σ, j = 1, ..., J; then in this general multivariate linear model the best linear unbiased estimator is given by the weighted least squares estimator

    β̂ = ( A'(Σ^{-1} ⊗ W)A )^{-1} A'(Σ^{-1} ⊗ W)X,

where

    X = (X_(1)', X_(2)', ..., X_(r)')',   X_(ϱ) = (X^(ϱ)_j(t_j))_{j=1,...,J},    (50)

are the vectorized observations with cov(X) = Σ ⊗ W^{-1},

    A = diag( A^(1), A^(2), ..., A^(r) ),    (51)

A^(ϱ) = (a_ϱ(t_1), ..., a_ϱ(t_J))', is the design matrix associated with the supporting points t_j of the design, and the diagonal matrix

    W = diag( w_1, w_2, ..., w_J )    (52)

is the intensity matrix of the observations. We notice that

    I(δ) = A'(Σ^{-1} ⊗ W)A = Σ_{j=1}^{J} w_j a(t_j)' Σ^{-1} a(t_j),    (53)

where cov(β̂) = I(δ)^{-1}.
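Computing the information matrix (53) of a general design is a weighted sum over its support. A minimal sketch follows; the helper name info_matrix, the argument reg_funs and the bivariate toy model (one regression function per component, arbitrary weights and points) are assumptions made for illustration.

    import numpy as np
    from scipy.linalg import block_diag

    def info_matrix(points, weights, Sigma, reg_funs):
        """I(delta) = sum_j w_j a(t_j)' Sigma^{-1} a(t_j), cf. (53);
        reg_funs[rho](t) returns the regression vector a_rho(t)."""
        Sigma_inv = np.linalg.inv(Sigma)
        p = sum(len(f(points[0])) for f in reg_funs)
        I = np.zeros((p, p))
        for t, w in zip(points, weights):
            a_t = block_diag(*[f(t).reshape(1, -1) for f in reg_funs])  # a(t), cf. (7)
            I += w * a_t.T @ Sigma_inv @ a_t
        return I

    Sigma = np.array([[1.0, 0.8], [0.8, 1.0]])
    reg = [lambda t: np.array([t]),            # component 1: beta_1 * t
           lambda t: np.array([t**2])]         # component 2: beta_2 * t^2
    I_delta = info_matrix([0.5, 1.0], [0.5, 0.5], Sigma, reg)
    print(I_delta)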

Hence a general design δ which assigns weights w_j to the supporting points t_j, j = 1, ..., J, has to be understood as an instruction which demands that J different observations are to be made at the design points t_j with intensity proportional to w_j, j = 1, ..., J.

A general design is called D- (A- resp. E-) optimum if it minimizes the determinant (the trace resp. the largest eigenvalue) of the inverse I(δ)^{-1} of the information matrix. Note that I(δ_d)^{-1} coincides with the covariance matrix cov(β̂) of an elementary design d with sample size N, up to the multiplicative constant 1/N, and hence the generalizations of the criteria are straightforward.

All the results obtained for elementary fixed sample size designs in the preceding sections are also valid for general designs and are formulated next.

Theorem 1'. If the design δ is D- (A- resp. E-) optimum in the univariate model with mean response μ(t) = a_1(t)'β, then δ is D- (A- resp. E-) optimum in the multivariate model with homogeneous components μ_ϱ(t) = a_1(t)'β_ϱ.

Note that this simple theorem, restricted to D-optimality, is the main result of Chang (1994), obtained by using such demanding tools as the equivalence theorem (see Theorem 6 below).

Theorem 2'. If the design δ is D- (A- resp. E-) optimum in each univariate model with mean response μ_ϱ(t) = a_ϱ(t)'β_ϱ, then δ is D- (A- resp. E-) optimum in the multivariate model with uncorrelated components.

Theorem 3'. If the design δ is D-optimum in each univariate model with mean response μ_ϱ(t) = a_ϱ(t)'β_ϱ, a_ϱ = C_ϱ a_{ϱ+1}, then δ is D-optimum in the multivariate model with increasing complexity.

Theorem 4'. If the design δ is D-optimum in each univariate model with mean response μ_ϱ(t) = a_ϱ(t)'β_ϱ, a_ϱ = (f_0', f_ϱ')', (f_0', f_1', f_2') linearly independent, and if ∫ f_1 f_2' dδ = ∫ f_1 f_0' dδ (∫ f_0 f_0' dδ)^{-1} ∫ f_0 f_2' dδ, then δ is D-optimum in the bivariate model with mutually different complexity.

For the bivariate case we consider the augmented univariate model with mean response μ_{1,2}(t) = f_0(t)'β_0 + f_1(t)'β_1 + f_2(t)'β_2, such that the corresponding regression function a_{1,2} = (f_0', f_1', f_2')' collects all functions occurring in the univariate components. If f_0 = 1 or if f_0 is not present in the model, then for the information matrix I_{1,2}(δ) = ∫ a_{1,2} a_{1,2}' dδ it is straightforward by a refinement argument that its determinant is dominated by the information contained in the univariate component models,

    det(I_{1,2}(δ)) ≤ det(∫ f̃_1 f̃_1' dδ) det(∫ f̃_2 f̃_2' dδ) = det(I_1(δ)) det(I_2(δ)),    (54)

where f̃_ϱ denotes f_ϱ adjusted for f_0 with respect to δ and I_ϱ(δ) is the information matrix in the ϱth univariate component model.

Hence, the factorization condition of Theorem 4' implies the D-optimality of δ in the augmented model. In the next section we will establish the result that the additional D-optimality of δ in the augmented univariate model is, indeed, necessary for the D-optimality in the bivariate model for general covariance structure Σ.

Theorem 5'. If the design δ is D-optimum in each univariate model with mean response μ_ϱ(t) = a_ϱ(t)'β_ϱ, and if for every pair ϱ and ϱ' the sets of functions (f_{ϱ,ϱ'}) = (a_ϱ) ∩ (a_{ϱ'}), (a_ϱ) \ (f_{ϱ,ϱ'}) and (a_{ϱ'}) \ (f_{ϱ,ϱ'}) are linearly independent and the factorization ∫ a_ϱ a_{ϱ'}' dδ = ∫ a_ϱ f_{ϱ,ϱ'}' dδ (∫ f_{ϱ,ϱ'} f_{ϱ,ϱ'}' dδ)^{-1} ∫ f_{ϱ,ϱ'} a_{ϱ'}' dδ holds, then δ is D-optimum in the multivariate model with mutually different complexity.

Finally, we mention that a useful tool for checking a general design to be D-optimum in a multivariate model is the following equivalence theorem (see Fedorov, 1972, p. 22), of which we will make use in the subsequent section.

Theorem 6. (i) A design δ* is D-optimum if, and only if, tr( Σ^{-1} a(t) I(δ*)^{-1} a(t)' ) ≤ p, for every t ∈ T.
(ii) If δ* is D-optimum, then δ*( { t : tr( Σ^{-1} a(t) I(δ*)^{-1} a(t)' ) = p } ) = 1.

9. A counterexample and its theoretical background

Although the results of the preceding sections seem to include all practically relevant general parametrizations for multivariate linear experiments, the following example, which is more of mathematical interest, shows that the generally accepted opinion that factorization is always possible is misleading. This is caused by the fact that the concept of orthogonalization, which is a universally useful tool in the design of experiments, may conflict with the meaning of the parameters if applied across the components of a multivariate experiment.

To keep notations as simple as possible we consider a bivariate model with different one-parameter univariate components for which the same design is optimum for both components. In particular, we will deal with a purely linear component and with a purely quadratic component,

    X^(1)(t) = β_1 t + Z^(1),    (55)
    X^(2)(t) = β_2 t² + Z^(2),    (56)

on the design region T = [α, 1], for various α, −1 ≤ α < 1.
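Condition (i) of Theorem 6 can be checked on a grid. The sketch below does this for the example (55)-(56) with T = [0, 1], the one-point design ε_1 and the covariance Σ(c) with correlation c (cf. (57) below); the grid, the chosen correlations and the helper names are illustrative assumptions. The largest value of tr(Σ^{-1} a(t) I^{-1} a(t)') over the grid must not exceed p = 2 for ε_1 to be D-optimum.

    import numpy as np

    def lhs_equivalence(t, c, I_inv):
        """tr(Sigma^{-1} a(t) I^{-1} a(t)') for a(t) = diag(t, t^2), Sigma = Sigma(c)."""
        Sigma_inv = np.linalg.inv(np.array([[1.0, c], [c, 1.0]]))
        a_t = np.diag([t, t**2])
        return np.trace(Sigma_inv @ a_t @ I_inv @ a_t.T)

    def check_one_point_design(c, grid=np.linspace(0.0, 1.0, 2001)):
        Sigma_inv = np.linalg.inv(np.array([[1.0, c], [c, 1.0]]))
        a1 = np.diag([1.0, 1.0])                     # a(1) = diag(1, 1)
        I = a1.T @ Sigma_inv @ a1                    # information of eps_1, cf. (53)
        I_inv = np.linalg.inv(I)
        return max(lhs_equivalence(t, c, I_inv) for t in grid)   # <= 2 means eps_1 is D-optimum

    for c in (0.5, 0.9, 0.99):
        print(c, round(check_one_point_design(c), 4))
    # for c = 0.5 and 0.9 the maximum is 2 (attained at t = 1);
    # for c = 0.99 it exceeds 2, so eps_1 is no longer D-optimum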

For α > −1 the unique optimum design for both univariate models is given by the one-point design ε_1, concentrated at t = 1, the location of the largest modulus of both regression functions t and t² on T.

The criterion under consideration for the bivariate model will be D-optimality, which is not affected by scale transformations of the componentwise variances. Hence we may assume without loss of generality

    Σ(c) = ( 1  c ;  c  1 )    (57)

for various correlations c, −1 < c < 1. As ε_1 is not D-optimum for the augmented univariate model μ_{1,2}(t) = β_1 t + β_2 t², it is to be expected that ε_1 is not D-optimum for the bivariate model as well, at least for large correlations |c|.

First we compare ε_1 with the best two-point design δ(α, c) = w_α(α, c) ε_α + (1 − w_α(α, c)) ε_1 concentrated on the endpoints of the design region [α, 1], α ≠ 0. That two-point design dominates ε_1 if

    w(α, c) = [ α²(1−α)² − 2(1−α³)(1−c²) ] / ( 2[ α²(1−α)² − (1−α³)²(1−c²) ] ) > 0,    (58)

in which case w_α(α, c) = w(α, c). Note that w(α, c) = w(α, −c). Hence, for every α there is a critical value

    c_krit(α) = ( 1 − α²(1−α)² / (2(1−α³)) )^{1/2}    (59)

for the correlation beyond which (|c| > c_krit(α)) the design ε_1 is dominated by δ(α, c). Note that δ(α, c) is not necessarily the D-optimum design for the bivariate model, although the Equivalence Theorem 6 ensures that the optimum design is concentrated on, at most, two supporting points including, at least, the endpoint t = 1. Note also that c_krit(α) → 0 for α → −1.

For the standard design region T = [0, 1] the D-optimum design δ*(c) is supported by, at most, two points including the right endpoint t = 1. For |c| ≤ c_min ≈ 0.98 (< 1 = c_krit(0)) the one-point design ε_1 is D-optimum for the bivariate model. However, for stronger correlations, |c| → 1, it can be checked that a second supporting point t_2* is required for the D-optimum design, that t_2* approaches 0.5, and that the corresponding optimal weight w_2* = w*(t_2*, c) tends to 1/2. Hence δ*(c) converges, as |c| → 1, to the design δ*(1) which is D-optimum in the augmented univariate model.
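The quantities in (58) and (59) are elementary to evaluate. The following sketch computes c_krit(α) and, for a correlation just above it, compares the design-dependent part of det I(δ), which for this example is m_2 m_4 − c² m_3² with m_k = ∫ t^k dδ (the specialization of the criterion φ_c in (60) below), for the one-point design ε_1 and the two-point design δ(α, c) with weight w(α, c) at α. The numerical values of α and c are chosen only for illustration.

    import numpy as np

    def c_krit(alpha):
        """critical correlation (59) beyond which eps_1 is dominated by delta(alpha, c)"""
        return np.sqrt(1.0 - alpha**2 * (1 - alpha)**2 / (2 * (1 - alpha**3)))

    def w_opt(alpha, c):
        """optimal weight (58) at the point alpha for the two-point design on {alpha, 1}"""
        num = alpha**2 * (1 - alpha)**2 - 2 * (1 - alpha**3) * (1 - c**2)
        den = 2 * (alpha**2 * (1 - alpha)**2 - (1 - alpha**3)**2 * (1 - c**2))
        return num / den

    def phi(points, weights, c):
        """design-dependent part of det I(delta): m2*m4 - c^2*m3^2 (cf. (60))"""
        m = lambda k: sum(w * t**k for t, w in zip(points, weights))
        return m(2) * m(4) - c**2 * m(3)**2

    alpha = -0.5
    print("c_krit(%.1f) = %.4f" % (alpha, c_krit(alpha)))       # approx 0.866
    c = 0.95                                                    # above the critical value
    w = w_opt(alpha, c)
    print("phi(eps_1)     =", phi([1.0], [1.0], c))
    print("phi(two-point) =", phi([alpha, 1.0], [w, 1 - w], c))  # larger than phi(eps_1)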

In general, we consider

    φ_c(δ) = det(∫ f_0 f_0' dδ)² det(∫ f̃_1 f̃_1' dδ) det( ∫ f̃_2 f̃_2' dδ − c² ∫ f̃_2 f̃_1' dδ (∫ f̃_1 f̃_1' dδ)^{-1} ∫ f̃_1 f̃_2' dδ ).    (60)

Then, given Σ(c), a design δ*(c) is D-optimum for the bivariate model if, and only if, it maximizes φ_c(δ), |c| < 1. If δ*(c) → δ*(1) as |c| → 1, then φ_c(δ*(c)) → φ_1(δ*(1)), at least for continuous regression functions on a compact design region. Now, for any design η which maximizes φ_1, we have φ_c(η) ≤ φ_c(δ*(c)) and φ_c(η) → φ_1(η), as |c| → 1. Hence φ_1(η) ≤ φ_1(δ*(1)) and δ*(1) is also optimum with respect to φ_1. Thus we can formulate the following result.

Theorem 7. Let δ*(c) be D-optimum in the bivariate model with mutually different complexity and underlying covariance structure Σ(c), let a be continuous and T be compact. Then every limiting design δ*(1) of (δ*(c)), as |c| → 1, maximizes φ_1(δ).

In the particular case that either f_0 = 1 or (f_0) = ∅, the design δ*(1) maximizes φ_1(δ) if, and only if, δ*(1) is D-optimum in the augmented univariate model. Moreover, if δ*(1) is the unique D-optimum design in the augmented univariate model, then δ*(c) → δ*(1) as |c| → 1. Hence, in this case, it is necessary for the simultaneous D-optimality of a design δ in the bivariate model for every covariance structure Σ that δ is also D-optimum in the augmented univariate model. For the specific example above, this situation occurs if α = −1, and the simultaneously optimum design assigns equal weights to both endpoints −1 and +1 of the design region T = [−1, 1]. Note that in this case also the factorization conditions are satisfied.

Appendix

By ℓ_ϱ = A_{d,ϱ}'A^(ϱ−1)(A^(ϱ−1)'A^(ϱ−1))^{-1} we denote the componentwise orthogonalization matrix of f_ϱ with respect to a_{ϱ−1}, and then the transformation matrix

    L_ϱ = ( E_{q_1}                         )
          ( −ℓ_2    E_{q_2}                  )
          ( −ℓ_3             E_{q_3}          )    (61)
          (   ⋮                        ⋱     )
          ( −ℓ_ϱ                     E_{q_ϱ} ),

where the block −ℓ_{ϱ'} occupies the first p_{ϱ'−1} columns of its block row,

for the univariate model, ã_ϱ = L_ϱ a_ϱ, is lower triangular with ones on the diagonal (L_1 = E_{q_1}, q_1 = p_1). Finally, the transformation matrix L for the multivariate model, ã = L a, is block diagonal,

    L = diag( L_1, L_2, ..., L_r ),    (62)

and, hence, by (61), lower triangular with all diagonal entries one.

REFERENCES

Chang, S. I. (1994). Some properties of multiresponse D-optimal designs. J. Math. Anal. Appl. 184, 256-262.

Fedorov, V. V. (1972). Theory of Optimal Experiments. Academic Press, New York.

Krafft, O. and Schaefer, M. (1992). D-optimal designs for a multivariate regression model. J. Multivariate Anal. 42, 130-140.

Kurotschka, V. G. (1984). A general approach to optimum design of experiments with qualitative and quantitative factors. In: Statistics: Applications and New Directions. Proceedings of the Indian Statistical Institute Golden Jubilee International Conference, Calcutta 1981 (J. K. Ghosh and J. Roy (eds)). Indian Statistical Institute, Calcutta, 353-368.

Kurotschka, V. G. (1988). Characterizations and examples of optimal experiments with qualitative and quantitative factors. In: Model-Oriented Data Analysis, Proceedings Eisenach 1987 (V. Fedorov and H. Läuter (eds)). Springer, Berlin, 53-71.

Linde, A. van der (1977). Versuchsplanung für multivariate lineare Modelle. Diplomarbeit, Freie Universität Berlin, Fachbereich Mathematik. 75 pages.

Pazman, A. (1986). Foundations of Optimum Experimental Design. Reidel, Dordrecht.

Schwabe, R. (1995a). Experimental design for linear models with higher order interaction terms. In: Symposia Gaussiana. Proceedings of the 2nd Gauss Symposium, München 1993, Conference B: Statistical Sciences (V. Mammitzsch and H. Schneeweiß (eds)). DeGruyter, Berlin, 281-288.

Schwabe, R. (1995b). Optimal designs for additive linear models. Statistics (in press).

Schwabe, R. (1995c). Optimum Designs for Multi-factor Models. Lecture Notes in Statistics. Springer, New York (to appear).

Wegscheider, K. (1977). Optimale Versuchsplanung für multivariate Beobachtungsprozesse mit kontinuierlich variierenden Versuchsbedingungen. Diplomarbeit, Freie Universität Berlin, Fachbereich Mathematik. 282 pages.


More information

Jointly continuous distributions and the multivariate Normal

Jointly continuous distributions and the multivariate Normal Jointly continuous istributions an the multivariate Normal Márton alázs an álint Tóth October 3, 04 This little write-up is part of important founations of probability that were left out of the unit Probability

More information

Influence of weight initialization on multilayer perceptron performance

Influence of weight initialization on multilayer perceptron performance Influence of weight initialization on multilayer perceptron performance M. Karouia (1,2) T. Denœux (1) R. Lengellé (1) (1) Université e Compiègne U.R.A. CNRS 817 Heuiasyc BP 649 - F-66 Compiègne ceex -

More information

IPA Derivatives for Make-to-Stock Production-Inventory Systems With Backorders Under the (R,r) Policy

IPA Derivatives for Make-to-Stock Production-Inventory Systems With Backorders Under the (R,r) Policy IPA Derivatives for Make-to-Stock Prouction-Inventory Systems With Backorers Uner the (Rr) Policy Yihong Fan a Benamin Melame b Yao Zhao c Yorai Wari Abstract This paper aresses Infinitesimal Perturbation

More information

Robust Forward Algorithms via PAC-Bayes and Laplace Distributions. ω Q. Pr (y(ω x) < 0) = Pr A k

Robust Forward Algorithms via PAC-Bayes and Laplace Distributions. ω Q. Pr (y(ω x) < 0) = Pr A k A Proof of Lemma 2 B Proof of Lemma 3 Proof: Since the support of LL istributions is R, two such istributions are equivalent absolutely continuous with respect to each other an the ivergence is well-efine

More information

Multi-View Clustering via Canonical Correlation Analysis

Multi-View Clustering via Canonical Correlation Analysis Technical Report TTI-TR-2008-5 Multi-View Clustering via Canonical Correlation Analysis Kamalika Chauhuri UC San Diego Sham M. Kakae Toyota Technological Institute at Chicago ABSTRACT Clustering ata in

More information

Thermal runaway during blocking

Thermal runaway during blocking Thermal runaway uring blocking CES_stable CES ICES_stable ICES k 6.5 ma 13 6. 12 5.5 11 5. 1 4.5 9 4. 8 3.5 7 3. 6 2.5 5 2. 4 1.5 3 1. 2.5 1. 6 12 18 24 3 36 s Thermal runaway uring blocking Application

More information

'HVLJQ &RQVLGHUDWLRQ LQ 0DWHULDO 6HOHFWLRQ 'HVLJQ 6HQVLWLYLW\,1752'8&7,21

'HVLJQ &RQVLGHUDWLRQ LQ 0DWHULDO 6HOHFWLRQ 'HVLJQ 6HQVLWLYLW\,1752'8&7,21 Large amping in a structural material may be either esirable or unesirable, epening on the engineering application at han. For example, amping is a esirable property to the esigner concerne with limiting

More information

Perfect Matchings in Õ(n1.5 ) Time in Regular Bipartite Graphs

Perfect Matchings in Õ(n1.5 ) Time in Regular Bipartite Graphs Perfect Matchings in Õ(n1.5 ) Time in Regular Bipartite Graphs Ashish Goel Michael Kapralov Sanjeev Khanna Abstract We consier the well-stuie problem of fining a perfect matching in -regular bipartite

More information

2Algebraic ONLINE PAGE PROOFS. foundations

2Algebraic ONLINE PAGE PROOFS. foundations Algebraic founations. Kick off with CAS. Algebraic skills.3 Pascal s triangle an binomial expansions.4 The binomial theorem.5 Sets of real numbers.6 Surs.7 Review . Kick off with CAS Playing lotto Using

More information

inflow outflow Part I. Regular tasks for MAE598/494 Task 1

inflow outflow Part I. Regular tasks for MAE598/494 Task 1 MAE 494/598, Fall 2016 Project #1 (Regular tasks = 20 points) Har copy of report is ue at the start of class on the ue ate. The rules on collaboration will be release separately. Please always follow the

More information

Fast image compression using matrix K-L transform

Fast image compression using matrix K-L transform Fast image compression using matrix K-L transform Daoqiang Zhang, Songcan Chen * Department of Computer Science an Engineering, Naning University of Aeronautics & Astronautics, Naning 2006, P.R. China.

More information

Gaussian processes with monotonicity information

Gaussian processes with monotonicity information Gaussian processes with monotonicity information Anonymous Author Anonymous Author Unknown Institution Unknown Institution Abstract A metho for using monotonicity information in multivariate Gaussian process

More information

The Impact of Collusion on the Price of Anarchy in Nonatomic and Discrete Network Games

The Impact of Collusion on the Price of Anarchy in Nonatomic and Discrete Network Games The Impact of Collusion on the Price of Anarchy in Nonatomic an Discrete Network Games Tobias Harks Institute of Mathematics, Technical University Berlin, Germany harks@math.tu-berlin.e Abstract. Hayrapetyan,

More information

Homework 3 - Solutions

Homework 3 - Solutions Homework 3 - Solutions The Transpose an Partial Transpose. 1 Let { 1, 2,, } be an orthonormal basis for C. The transpose map efine with respect to this basis is a superoperator Γ that acts on an operator

More information

UNIFYING PCA AND MULTISCALE APPROACHES TO FAULT DETECTION AND ISOLATION

UNIFYING PCA AND MULTISCALE APPROACHES TO FAULT DETECTION AND ISOLATION UNIFYING AND MULISCALE APPROACHES O FAUL DEECION AND ISOLAION Seongkyu Yoon an John F. MacGregor Dept. Chemical Engineering, McMaster University, Hamilton Ontario Canaa L8S 4L7 yoons@mcmaster.ca macgreg@mcmaster.ca

More information

Web-Based Technical Appendix: Multi-Product Firms and Trade Liberalization

Web-Based Technical Appendix: Multi-Product Firms and Trade Liberalization Web-Base Technical Appeni: Multi-Prouct Firms an Trae Liberalization Anrew B. Bernar Tuck School of Business at Dartmouth & NBER Stephen J. Reing LSE, Yale School of Management & CEPR Peter K. Schott Yale

More information

d-dimensional Arrangement Revisited

d-dimensional Arrangement Revisited -Dimensional Arrangement Revisite Daniel Rotter Jens Vygen Research Institute for Discrete Mathematics University of Bonn Revise version: April 5, 013 Abstract We revisit the -imensional arrangement problem

More information

19 Eigenvalues, Eigenvectors, Ordinary Differential Equations, and Control

19 Eigenvalues, Eigenvectors, Ordinary Differential Equations, and Control 19 Eigenvalues, Eigenvectors, Orinary Differential Equations, an Control This section introuces eigenvalues an eigenvectors of a matrix, an iscusses the role of the eigenvalues in etermining the behavior

More information

Lecture 6: Generalized multivariate analysis of variance

Lecture 6: Generalized multivariate analysis of variance Lecture 6: Generalize multivariate analysis of variance Measuring association of the entire microbiome with other variables Distance matrices capture some aspects of the ata (e.g. microbiome composition,

More information

A variance decomposition and a Central Limit Theorem for empirical losses associated with resampling designs

A variance decomposition and a Central Limit Theorem for empirical losses associated with resampling designs Mathias Fuchs, Norbert Krautenbacher A variance ecomposition an a Central Limit Theorem for empirical losses associate with resampling esigns Technical Report Number 173, 2014 Department of Statistics

More information

u!i = a T u = 0. Then S satisfies

u!i = a T u = 0. Then S satisfies Deterministic Conitions for Subspace Ientifiability from Incomplete Sampling Daniel L Pimentel-Alarcón, Nigel Boston, Robert D Nowak University of Wisconsin-Maison Abstract Consier an r-imensional subspace

More information

A. Exclusive KL View of the MLE

A. Exclusive KL View of the MLE A. Exclusive KL View of the MLE Lets assume a change-of-variable moel p Z z on the ranom variable Z R m, such as the one use in Dinh et al. 2017: z 0 p 0 z 0 an z = ψz 0, where ψ is an invertible function

More information

SYNCHRONOUS SEQUENTIAL CIRCUITS

SYNCHRONOUS SEQUENTIAL CIRCUITS CHAPTER SYNCHRONOUS SEUENTIAL CIRCUITS Registers an counters, two very common synchronous sequential circuits, are introuce in this chapter. Register is a igital circuit for storing information. Contents

More information

Lecture XII. where Φ is called the potential function. Let us introduce spherical coordinates defined through the relations

Lecture XII. where Φ is called the potential function. Let us introduce spherical coordinates defined through the relations Lecture XII Abstract We introuce the Laplace equation in spherical coorinates an apply the metho of separation of variables to solve it. This will generate three linear orinary secon orer ifferential equations:

More information

The Principle of Least Action

The Principle of Least Action Chapter 7. The Principle of Least Action 7.1 Force Methos vs. Energy Methos We have so far stuie two istinct ways of analyzing physics problems: force methos, basically consisting of the application of

More information

Shifted Independent Component Analysis

Shifted Independent Component Analysis Downloae rom orbit.tu.k on: Dec 06, 2017 Shite Inepenent Component Analysis Mørup, Morten; Masen, Kristoer Hougaar; Hansen, Lars Kai Publishe in: 7th International Conerence on Inepenent Component Analysis

More information

Neuro-Fuzzy Processor

Neuro-Fuzzy Processor An Introuction to Fuzzy State Automata L.M. Reyneri Dipartimento i Elettronica - Politecnico i Torino C.so Duca Abruzzi, 24-10129 Torino - ITALY e.mail reyneri@polito.it; phone ++39 11 568 4038; fax ++39

More information

Closed and Open Loop Optimal Control of Buffer and Energy of a Wireless Device

Closed and Open Loop Optimal Control of Buffer and Energy of a Wireless Device Close an Open Loop Optimal Control of Buffer an Energy of a Wireless Device V. S. Borkar School of Technology an Computer Science TIFR, umbai, Inia. borkar@tifr.res.in A. A. Kherani B. J. Prabhu INRIA

More information

Problems Governed by PDE. Shlomo Ta'asan. Carnegie Mellon University. and. Abstract

Problems Governed by PDE. Shlomo Ta'asan. Carnegie Mellon University. and. Abstract Pseuo-Time Methos for Constraine Optimization Problems Governe by PDE Shlomo Ta'asan Carnegie Mellon University an Institute for Computer Applications in Science an Engineering Abstract In this paper we

More information

JUST THE MATHS UNIT NUMBER DIFFERENTIATION 2 (Rates of change) A.J.Hobson

JUST THE MATHS UNIT NUMBER DIFFERENTIATION 2 (Rates of change) A.J.Hobson JUST THE MATHS UNIT NUMBER 10.2 DIFFERENTIATION 2 (Rates of change) by A.J.Hobson 10.2.1 Introuction 10.2.2 Average rates of change 10.2.3 Instantaneous rates of change 10.2.4 Derivatives 10.2.5 Exercises

More information

Lecture 2 Lagrangian formulation of classical mechanics Mechanics

Lecture 2 Lagrangian formulation of classical mechanics Mechanics Lecture Lagrangian formulation of classical mechanics 70.00 Mechanics Principle of stationary action MATH-GA To specify a motion uniquely in classical mechanics, it suffices to give, at some time t 0,

More information

MEASURES WITH ZEROS IN THE INVERSE OF THEIR MOMENT MATRIX

MEASURES WITH ZEROS IN THE INVERSE OF THEIR MOMENT MATRIX MEASURES WITH ZEROS IN THE INVERSE OF THEIR MOMENT MATRIX J. WILLIAM HELTON, JEAN B. LASSERRE, AND MIHAI PUTINAR Abstract. We investigate an iscuss when the inverse of a multivariate truncate moment matrix

More information

Leaving Randomness to Nature: d-dimensional Product Codes through the lens of Generalized-LDPC codes

Leaving Randomness to Nature: d-dimensional Product Codes through the lens of Generalized-LDPC codes Leaving Ranomness to Nature: -Dimensional Prouct Coes through the lens of Generalize-LDPC coes Tavor Baharav, Kannan Ramchanran Dept. of Electrical Engineering an Computer Sciences, U.C. Berkeley {tavorb,

More information

Transmission Line Matrix (TLM) network analogues of reversible trapping processes Part B: scaling and consistency

Transmission Line Matrix (TLM) network analogues of reversible trapping processes Part B: scaling and consistency Transmission Line Matrix (TLM network analogues of reversible trapping processes Part B: scaling an consistency Donar e Cogan * ANC Eucation, 308-310.A. De Mel Mawatha, Colombo 3, Sri Lanka * onarecogan@gmail.com

More information

The Exact Form and General Integrating Factors

The Exact Form and General Integrating Factors 7 The Exact Form an General Integrating Factors In the previous chapters, we ve seen how separable an linear ifferential equations can be solve using methos for converting them to forms that can be easily

More information