Distance optimality design criterion in linear models
Metrika (1999) 49: 193–211 © Springer-Verlag 1999

Distance optimality design criterion in linear models

Erkki P. Liski¹, Arto Luoma¹, Alexander Zaigraev²

¹ Department of Mathematics, Statistics and Philosophy, University of Tampere, P.O. Box 607, FIN Tampere, Finland (epl@uta. ; al18853@uta. )
² Faculty of Mathematics and Informatics, Nicholas Copernicus University, Chopin str. 12/18, PL Torun, Poland (alzaig@mat.uni.tosun.pl)

Received: June 1999

Abstract. Properties of the most familiar optimality criteria, for example A-, D- and E-optimality, are well known, but the distance optimality criterion has not drawn much attention to date. In this paper properties of the distance optimality criterion for the parameter vector of the classical linear model under normally distributed errors are investigated. DS-optimal designs are derived for first-order polynomial fit models. The matter of how the distance optimality criterion is related to traditional D- and E-optimality criteria is also addressed.

Key words: Schur-concavity, polynomial designs, D- and E-optimality, isotonicity, admissibility

1 Introduction

There exists an extensive literature on the characterization of optimal designs under both discrete and continuous settings using the most familiar optimality criteria, for example A-, D- and E-optimality. For references see for example the monograph by Shah and Sinha (1989) and the book by Pukelsheim (1993). However, the distance optimality criterion, though briefly mentioned in the monograph by Shah and Sinha, has not drawn much attention hitherto.

Let us first consider a line fit model through the origin. Suppose we have $n$ uncorrelated responses

$$Y_{ij} = x_i \beta + \varepsilon_{ij}, \qquad (1.1)$$

$i = 1, 2, \ldots, l$ and $j = 1, 2, \ldots, n_i$, with expectations and variances $E[Y_{ij}] = x_i \beta$ and $V[Y_{ij}] = \sigma^2$,
respectively. The least squares estimator (LSE) of $\beta$,

$$\hat\beta = \sum_{i=1}^{l} n_i x_i \bar Y_i \Big/ \sum_{i=1}^{l} n_i x_i^2, \qquad (1.2)$$

where $\bar Y_i = (Y_{i1} + Y_{i2} + \cdots + Y_{in_i})/n_i$, has minimum variance among all linear unbiased estimators of $\beta$. The variance $V[\hat\beta] = \sigma^2 / \sum_{i=1}^{l} n_i x_i^2$ depends on the values $x_1, x_2, \ldots, x_l$ of the regressor $x$ and the numbers of replications $n_1, n_2, \ldots, n_l$. The values $x_i$ are chosen from a given regression range $\chi$, which is usually an interval $[a, b]$. An experimental design $\xi_n$, or simply $\xi$, for sample size $n$ is given by a finite number of regression values in $\chi$, and nonzero integers $n_1, n_2, \ldots, n_l$ such that $\sum_{i=1}^{l} n_i = n$. The LSE $\hat\beta$ of $\beta$ and its variance $V[\hat\beta]$ depend on $\xi$, that is, we may write $\hat\beta = \hat\beta(\xi)$. We now search for a design $\xi^*$ which maximizes the probability $P(|\hat\beta(\xi) - \beta| \le \varepsilon)$. The idea is to minimize the distance between the true parameter value and its estimate in a stochastic sense. Hence we call this criterion the DS-optimality criterion. A design $\xi^*$ is DS-optimal for the LSE of $\beta$ when

$$P(|\hat\beta(\xi^*) - \beta| \le \varepsilon) \ge P(|\hat\beta(\xi) - \beta| \le \varepsilon) \quad \text{for all } \varepsilon \ge 0 \qquad (1.3)$$

and for any competing design $\xi$ with a fixed sample size $n$. Let $\chi^2_m$ denote a chi-squared random variable with $m$ degrees of freedom. Suppose that the observations $Y_{ij}$ in the model (1.1) follow a normal distribution. Then $(\hat\beta - \beta)^2 / V[\hat\beta] \sim \chi^2_1$, and hence

$$P(|\hat\beta - \beta| \le \varepsilon) = P\big((\hat\beta - \beta)^2 \le \varepsilon^2\big) = P\Big(\chi^2_1 \le \frac{\varepsilon^2}{\sigma^2} \sum_{i=1}^{l} n_i x_i^2\Big). \qquad (1.4)$$

Let $\chi = [a, b]$ be the regression range and let $\xi_p = \{a, b; p\}$ with $0 \le p \le 1$ denote a design that assigns the weights $p$ and $1 - p$ to the regression values $b$ and $a$, respectively. If $|b| > |a|$, then the unique maximum of the probability (1.4) is $P(\chi^2_1 \le \varepsilon^2 n b^2 / \sigma^2)$. Thus $\xi_1 = \{a, b; 1\}$ is the unique DS-optimal design. Similarly, if $|b| < |a|$, then $\xi_0 = \{a, b; 0\}$ is the unique DS-optimal design. Finally, if $\chi = [-a, a]$, then every design $\xi_p = \{-a, a; p\}$ with any $0 \le p \le 1$ is DS-optimal.
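As a numeric illustration (not part of the paper), the probability in (1.4) can be evaluated with the chi-squared distribution function; the regression range, sample sizes and designs below are made-up examples, and `ds_prob` is a hypothetical helper name.

```python
# Sketch: DS-criterion for the line fit through the origin, model (1.1).
# P(|b_hat - b| <= eps) = P(chi2_1 <= eps^2 * S / sigma^2),  S = sum_i n_i x_i^2.
from scipy.stats import chi2

def ds_prob(eps, design, sigma=1.0):
    """design: list of (x_i, n_i) pairs; returns the probability in (1.4)."""
    s = sum(n * x**2 for x, n in design)
    return chi2.cdf(eps**2 * s / sigma**2, df=1)

n = 10
a, b = -1.0, 2.0              # regression range [a, b] with |b| > |a|
xi_1 = [(b, n)]               # all n trials at b: the DS-optimal design here
xi_half = [(a, 5), (b, 5)]    # an equal split, for comparison

for eps in (0.1, 0.5, 1.0):
    assert ds_prob(eps, xi_1) >= ds_prob(eps, xi_half)
```

Since the chi-squared CDF is increasing in its argument, the design maximizing $\sum n_i x_i^2$ wins simultaneously for every $\varepsilon$, which is exactly why a DS-optimal design exists in this one-parameter case.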
In fact, under the model (1.1) with normally distributed errors the statements

(i) $P(|\hat\beta(\xi^*) - \beta| \le \varepsilon) \ge P(|\hat\beta(\xi) - \beta| \le \varepsilon)$ for all $\varepsilon > 0$,
(ii) $P(|\hat\beta(\xi^*) - \beta| \le \varepsilon) \ge P(|\hat\beta(\xi) - \beta| \le \varepsilon)$ for some $\varepsilon > 0$,
(iii) $V[\hat\beta(\xi^*)] \le V[\hat\beta(\xi)]$

are equivalent (cf. Stępniak 1989).
Note that the optimality criterion (1.3) is defined via the peakedness of the distributions of $\hat\beta(\xi^*)$ and $\hat\beta(\xi)$. According to a definition proposed by Birnbaum (1948), a random variable $Y_1$ is more peaked about $\mu_1$ than is a random variable $Y_2$ about $\mu_2$ if

$$P(|Y_1 - \mu_1| \le \varepsilon) \ge P(|Y_2 - \mu_2| \le \varepsilon) \quad \text{for all } \varepsilon \ge 0.$$

When $\mu_1 = \mu_2 = 0$, we simply say that $Y_1$ is more peaked than $Y_2$. This definition was generalized to the multivariate case by Sherman (1955). For $k$-dimensional random vectors $Y_1$ and $Y_2$, $Y_1$ is said to be more peaked than $Y_2$ if

$$P\{Y_1 \in A\} \ge P\{Y_2 \in A\} \qquad (1.5)$$

holds for all convex and symmetric (about the origin) sets $A \subset \mathbb{R}^k$.

In Section 2 we characterize DS($\varepsilon$)- and DS-optimality for designs under the classical linear model. Section 3 deals mainly with majorization properties of the DS-optimality criterion. In Section 4 we briefly consider symmetric polynomial designs and derive DS-optimal designs for the mean parameter of the $m$-way first-order polynomial fit model. Finally, in Section 5, we discuss the behaviour of the DS($\varepsilon$)-criterion as $\varepsilon \to 0$ and as $\varepsilon \to \infty$.

2 DS-optimality in linear models

In this paper, we consider distance optimality under the classical linear model

$$Y \sim N_n(X\beta, \sigma^2 I_n), \qquad (2.1)$$

where the $n \times 1$ response vector $Y = (Y_1, Y_2, \ldots, Y_n)'$ follows a multivariate normal distribution, $X = (x_1, x_2, \ldots, x_n)'$ is the $n \times k$ model matrix, $\beta = (\beta_1, \beta_2, \ldots, \beta_k)'$ is the $k \times 1$ parameter vector, $E[Y] = X\beta$ is the expectation vector of $Y$ and $D[Y] = \sigma^2 I_n$ is the dispersion matrix of $Y$, where $\sigma^2 = V[Y_i]$ and $I_n$ is the $n \times n$ identity matrix. The regression vector $x_i$ appears as the $i$th row of the model matrix $X$. In the spirit of (1.3) we now define a DS($\varepsilon$)-optimality criterion.

Definition 2.1. Let $\hat\beta_1 = \hat\beta(\xi_1)$ and $\hat\beta_2 = \hat\beta(\xi_2)$ be the LSE's of $\beta$ in (2.1) under the designs $\xi_1$ and $\xi_2$, respectively, and let $\|\cdot\|$ denote the Euclidean norm in $\mathbb{R}^k$.
If for a given $\varepsilon > 0$

$$P(\|\hat\beta_1 - \beta\| \le \varepsilon) \ge P(\|\hat\beta_2 - \beta\| \le \varepsilon), \qquad (2.2)$$

then the design $\xi_1$ is at least as good as $\xi_2$ with respect to the DS($\varepsilon$)-criterion. A design $\xi^*$ is said to be DS($\varepsilon$)-optimal for the LSE of $\beta$ in the model (2.1) if it maximizes the probability $P(\|\hat\beta - \beta\| \le \varepsilon)$. When $\xi^*$ is DS($\varepsilon$)-optimal for all $\varepsilon > 0$, we say that $\xi^*$ is DS-optimal. In the particular case $k = 1$ the DS-criterion coincides with the DS($\varepsilon$)-criterion for any given $\varepsilon > 0$. Sinha (1970) introduced the distance optimality criterion in a one-way ANOVA model for
optimal allocation of observations with a given total. Note that according to the usual definition of stochastic ordering for random variables (see Marshall and Olkin 1979, p. 481), $\|\hat\beta_1 - \beta\|$ is stochastically not greater than $\|\hat\beta_2 - \beta\|$ if the inequality (2.2) holds for all $\varepsilon > 0$.

An experimental design $\xi_n$ specifies $l \le n$ distinct regression vectors $x_1, x_2, \ldots, x_l$ and assigns to them frequencies $n_i$ such that $\sum_{i=1}^{l} n_i = n$. The regression vectors appearing in the design are called the support of $\xi_n$, that is, $\operatorname{supp}(\xi_n) = \{x_1, x_2, \ldots, x_l\}$. The matrix $X'X = \sum_{i=1}^{l} n_i x_i x_i'$ is the moment matrix of $\xi_n$; it is denoted by $M(\xi_n)$. The dispersion matrix of the LSE of $\beta = (\beta_1, \beta_2, \ldots, \beta_k)'$ is $\sigma^2 (X'X)^{-1}$ and its inverse is the precision matrix of $\hat\beta$. This matrix may be written as

$$\frac{1}{\sigma^2} M(\xi_n) = \frac{n}{\sigma^2} \sum_{i=1}^{l} \frac{n_i}{n}\, x_i x_i' = \frac{n}{\sigma^2} M(\xi_n/n),$$

where $M(\xi_n/n)$ is the averaged moment matrix of $\xi_n$ (see Pukelsheim 1993, p. 25). Designs for a specified number of trials are called exact. In practice all designs are exact. More generally, we may allow the weights (or proportions) of the regression vectors to vary continuously in the interval $[0, 1]$. A design in which the distribution of trials over $\chi$ is specified by a measure $\xi$, regardless of $n$, is called continuous or approximate. The moment matrix of a continuous design $\xi$ is defined by

$$M(\xi) = \sum_{x_i \in \operatorname{supp}(\xi)} \xi(x_i)\, x_i x_i',$$

where $\xi(x_i)$ denotes the weight of the regression vector $x_i$ (see Pukelsheim 1993, p. 26). The mathematical problem of finding the optimum design is simplified by considering only continuous designs, thus ignoring the constraint that the number of trials at any design point must be an integer.

Let $P\Lambda P' = M(\xi)$ be the spectral decomposition of $M(\xi)$, where $P$ is an orthogonal $k \times k$ matrix and $\Lambda = \operatorname{diag}(\lambda_1, \lambda_2, \ldots, \lambda_k)$ is the diagonal matrix of the eigenvalues of $M$ arranged in decreasing order $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_k > 0$. Define $Z = \frac{\sqrt{n}}{\sigma} \Lambda^{1/2} P' (\hat\beta - \beta)$ and note that $Z \sim N_k(0, I_k)$ and $D[\hat\beta] = \frac{\sigma^2}{n} P \Lambda^{-1} P'$.
Since the observations $Y$ follow a normal distribution $N_n(X\beta, \sigma^2 I_n)$, the LSE of $\beta = (\beta_1, \beta_2, \ldots, \beta_k)'$ follows the $k$-variate normal distribution

$$\hat\beta \sim N_k\Big(\beta, \frac{\sigma^2}{n} M(\xi)^{-1}\Big). \qquad (2.3)$$

As

$$P(\|\hat\beta - \beta\|^2 \le \varepsilon^2) = P\Big(\frac{\sigma^2}{n}\, Z' \Lambda^{-1} Z \le \varepsilon^2\Big) = P\Big(\sum_{i=1}^{k} \frac{Z_i^2}{\lambda_i} \le \delta^2\Big)
for all $\delta = \sqrt{n}\,\varepsilon/\sigma > 0$, the DS($\varepsilon$)-criterion $P(\|\hat\beta - \beta\| \le \varepsilon)$ depends on $M$ only through its eigenvalues $\lambda = (\lambda_1, \lambda_2, \ldots, \lambda_k)'$. We define the criterion function $\psi_\varepsilon$, or equivalently $\psi_\delta$, as

$$\psi_\varepsilon(M) = P(\|\hat\beta - \beta\|^2 \le \varepsilon^2) \quad \text{and} \quad \psi_\delta(\lambda) = P\Big(\sum_{i=1}^{k} \frac{Z_i^2}{\lambda_i} \le \delta^2\Big). \qquad (2.4)$$

It is clear that $\psi_\varepsilon(M) = \psi_\delta(\lambda)$ for $\delta = \sqrt{n}\,\varepsilon/\sigma > 0$. As a function of $\delta^2$ the DS($\varepsilon$)-optimality criterion $\psi_\delta(\lambda)$ is the cumulative distribution function of $\sum_{i=1}^{k} Z_i^2/\lambda_i$ for every fixed $\lambda \in \mathbb{R}_+^k$. A design $\xi^*$ is DS($\varepsilon$)-optimal for the LSE of $\beta$ in (2.1) if, for a given $\varepsilon > 0$, $\psi_\varepsilon[M(\xi^*)] \ge \psi_\varepsilon[M(\xi)]$ for all $\xi$. A design $\xi^*$ is DS-optimal if it is DS($\varepsilon$)-optimal for all $\varepsilon > 0$.

3 Properties of the DS-optimality criterion

The DS($\varepsilon$)-optimality criterion $\psi_\varepsilon(M)$ is a function from the set of $k \times k$ positive definite matrices into the interval $[0, 1]$. Equally well we may consider the corresponding function from the positive eigenvalues of $M$ into $[0, 1]$:

$$\psi_\delta(\lambda) : \mathbb{R}_+^k \to [0, 1].$$

It follows directly from (2.4) that $\psi_\varepsilon(aM) = \psi_{\sqrt{a}\,\varepsilon}(M)$ and $\psi_\delta(a\lambda) = \psi_{\sqrt{a}\,\delta}(\lambda)$ for all $a > 0$. An essential aspect of the optimality criterion $\psi_\varepsilon$ for given $\varepsilon > 0$ is that it induces an ordering among designs and among the corresponding moment matrices of designs. We say that a design $\xi_1$ is at least as good as $\xi_2$, relative to the criterion $\psi_\varepsilon$, if $\psi_\varepsilon[M(\xi_1)] \ge \psi_\varepsilon[M(\xi_2)]$. In this case we can also say that the corresponding moment matrix $M(\xi_1)$ is at least as good as $M(\xi_2)$ with respect to $\psi_\varepsilon$. The Loewner partial ordering $M(\xi_1) \ge M(\xi_2)$ in the set of $k \times k$ nonnegative definite matrices is defined by the relation

$$M(\xi_1) \ge M(\xi_2) \iff M(\xi_1) - M(\xi_2) \text{ nonnegative definite}$$

(see Pukelsheim 1993, p. 12).

3.1 Isotonicity and admissibility

The function $\psi_\varepsilon$ conforms to the Loewner ordering in the sense that it preserves the matrix ordering, i.e. $\psi_\varepsilon$ is isotonic for all $\varepsilon > 0$.

Theorem 3.1. The DS-criterion is isotonic relative to the Loewner ordering, that is,

$$M(\xi_1) \ge M(\xi_2) > 0 \implies \psi_\varepsilon[M(\xi_1)] \ge \psi_\varepsilon[M(\xi_2)] \quad \text{for all } \varepsilon > 0. \qquad (3.1)$$
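The criterion function (2.4) has no closed form for general $\lambda$, but it is easy to evaluate by Monte Carlo. The sketch below (an illustration, not from the paper; `psi_delta` and the eigenvalues are invented) also checks the scaling property $\psi_\delta(a\lambda) = \psi_{\sqrt{a}\,\delta}(\lambda)$.

```python
# Sketch: Monte Carlo evaluation of psi_delta(lambda) from (2.4),
# psi_delta(lam) = P( sum_i Z_i^2 / lam_i <= delta^2 ),  Z ~ N_k(0, I_k).
import numpy as np

def psi_delta(lam, delta, n_sim=200_000, seed=0):
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((n_sim, len(lam)))
    return np.mean((z**2 / np.asarray(lam)).sum(axis=1) <= delta**2)

# Scaling property psi_delta(a*lam) = psi_{sqrt(a)*delta}(lam), here a = 4:
lam = [2.0, 1.0, 0.5]
p1 = psi_delta([4 * l for l in lam], 1.0)
p2 = psi_delta(lam, 2.0)
assert abs(p1 - p2) < 0.01
```

With a common random seed the two estimates agree exactly, since the defining events coincide sample by sample.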
Proof. Let $M_i = M(\xi_i)$, $i = 1, 2$, be the moment matrices of the designs $\xi_1$ and $\xi_2$, respectively. Then for the designs $\xi_1$ and $\xi_2$ under the model (2.1) the LSE's satisfy $\hat\beta_i \sim N_k(\beta, \sigma^2 M_i^{-1})$, $i = 1, 2$. If $M_1 \ge M_2 > 0$, then the antitonicity of matrix inversion yields $M_2^{-1} \ge M_1^{-1} > 0$ (see Pukelsheim 1993, p. 13). Hence by Anderson's theorem (Anderson 1955, cf. Tong 1990, p. 73) we have

$$\psi_\varepsilon(M_1) = P(\|\hat\beta_1 - \beta\| \le \varepsilon) \ge P(\|\hat\beta_2 - \beta\| \le \varepsilon) = \psi_\varepsilon(M_2)$$

for all $\varepsilon > 0$. □

A reasonable weakest requirement for a moment matrix $M$ is that there be no competing moment matrix $A$ which is better than $M$ in the Loewner ordering sense. We say that a moment matrix $M$ is admissible when every competing moment matrix $A$ with $A \ge M$ is actually equal to $M$ (cf. Pukelsheim 1993, Chapter 10). A design $\xi$ is rated admissible when its moment matrix $M(\xi)$ is admissible. The admissible designs form a complete class (Pukelsheim 1993, Lemma 10.3). Thus every inadmissible moment matrix may be improved: if $M$ is inadmissible, then there exists an admissible moment matrix $A \ne M$ such that $A \ge M$. Since $\psi_\varepsilon$ is isotonic relative to the Loewner ordering, DS($\varepsilon$)-optimal designs as well as DS-optimal ones can be found in the set of admissible designs.

3.2 Schur-concavity of the DS-optimality criterion

The notion of majorization proves useful in a study of the function $\psi_\delta(\lambda)$. Majorization concerns the diversity of the components of a vector (cf. Marshall and Olkin 1979, p. 7). Let $\lambda = (\lambda_1, \lambda_2, \ldots, \lambda_k)'$ and $\gamma = (\gamma_1, \gamma_2, \ldots, \gamma_k)'$ be $k \times 1$ vectors and let $\lambda_{[1]} \ge \lambda_{[2]} \ge \cdots \ge \lambda_{[k]}$ and $\gamma_{[1]} \ge \gamma_{[2]} \ge \cdots \ge \gamma_{[k]}$ be their ordered components.

Definition 3.1. A vector $\lambda$ is said to majorize $\gamma$, written $\lambda \succ \gamma$, if $\sum_{i=1}^{m} \lambda_{[i]} \ge \sum_{i=1}^{m} \gamma_{[i]}$ holds for all $m = 1, 2, \ldots, k - 1$ and $\sum_{i=1}^{k} \lambda_i = \sum_{i=1}^{k} \gamma_i$.

Majorization provides a partial ordering on $\mathbb{R}^k$. The order $\lambda \succ \gamma$ implies that the elements of $\lambda$ are more diverse than the elements of $\gamma$. Then, for example,

$$\lambda \succ \bar\lambda = (\bar\lambda, \bar\lambda, \ldots, \bar\lambda)' \quad \text{for all } \lambda \in \mathbb{R}^k,$$

where $\bar\lambda = \frac{1}{k} \sum_{i=1}^{k} \lambda_i$.
Functions which reverse the ordering of majorization are said to be Schur-concave (cf. Marshall and Olkin 1979, p. 54).

Definition 3.2. A function $f(x) : \mathbb{R}^k \to \mathbb{R}$ is said to be a Schur-concave function if for all $x, y \in \mathbb{R}^k$ the relation $x \succ y$ implies $f(x) \le f(y)$.
Thus the value of $f(x)$ becomes greater when the components of $x$ become less diverse. For further details on Schur-concave functions, see Marshall and Olkin (1979). We say that the DS($\varepsilon$)-criterion is Schur-concave if the function $\psi_\delta(\lambda)$ defined by the equation (2.4) is a Schur-concave function of $\lambda = (\lambda_1, \lambda_2, \ldots, \lambda_k)'$.

Theorem 3.2. The DS($\varepsilon$)-criterion is Schur-concave for all $\varepsilon > 0$.

Proof. Consider the function $\psi_\delta(\lambda)$ defined in (2.4). Since the joint density function of the components of $Z$ is Schur-concave (Tong 1990, Theorem 4.4.1), then by the Proposition of Tong (1990, p. 163)

$$\psi_\delta(\lambda) = P\Big(\sum_{i=1}^{k} \frac{Z_i^2}{\lambda_i} \le \delta^2\Big)$$

is a Schur-concave function of $\lambda = (\lambda_1, \lambda_2, \ldots, \lambda_k)'$ for all $\delta > 0$. This proves the theorem. □

Since $\lambda \succ \bar\lambda = (\bar\lambda, \bar\lambda, \ldots, \bar\lambda)'$ for all $\lambda$, Theorem 3.2 implies the following corollary (cf. also Tong 1990, Theorem 7.4.2):

Corollary 3.1. For the function $\psi_\delta(\lambda)$ defined by the equation (2.4) the inequality $\psi_\delta(\lambda) \le \psi_\delta(\bar\lambda)$ holds for all $\lambda \in \mathbb{R}_+^k$ and all $\delta > 0$, where $\bar\lambda = (\bar\lambda, \bar\lambda, \ldots, \bar\lambda)'$ and $\bar\lambda = \frac{1}{k} \sum_{i=1}^{k} \lambda_i$.

Corollary 3.2. Let $\lambda$ and $\gamma$ denote the column vectors whose components are the eigenvalues of $M_1$ and $M_2$, respectively, arranged in decreasing order. If $M$ is a moment matrix with the vector of eigenvalues $(1 - \alpha)\lambda + \alpha\gamma$, then

$$\psi_\varepsilon[(1 - \alpha)M_1 + \alpha M_2] \ge \psi_\varepsilon(M) \qquad (3.2)$$

for all $\alpha \in [0, 1]$ and all $\varepsilon > 0$.

Proof. Let $\lambda[(1 - \alpha)M_1 + \alpha M_2]$ denote the column vector of eigenvalues of $(1 - \alpha)M_1 + \alpha M_2$ arranged in decreasing order. Since by Theorem G.1 (Marshall and Olkin 1979, p. 241) $(1 - \alpha)\lambda + \alpha\gamma \succ \lambda[(1 - \alpha)M_1 + \alpha M_2]$, the inequality (3.2) follows from Theorem 3.2. □

Note especially that $\lambda + \gamma \succ \lambda(M_1 + M_2)$. Therefore $\psi_\varepsilon(M_1 + M_2) \ge \psi_\varepsilon(M)$, where $\lambda + \gamma$ is the vector of eigenvalues of $M$. The following theorem is, in fact, one version of a result due to Okamoto. The proof can be found in Marshall and Olkin (1979, p. 303).

Theorem 3.3. The function $\psi_\delta(\lambda)$ is a Schur-concave function of $(\log \lambda_1, \log \lambda_2, \ldots, \log \lambda_k)'$ for all $\delta > 0$.
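Corollary 3.1 can be probed numerically: replacing a diverse spectrum by its arithmetic mean (a vector it majorizes) should not decrease $\psi_\delta$. The sketch below is an illustration under invented eigenvalues, reusing the Monte Carlo estimator idea from Section 2; `psi_delta` is a hypothetical helper.

```python
# Sketch: numerical check of Corollary 3.1 -- equalizing the eigenvalues
# (same sum, less diversity) cannot decrease psi_delta.
import numpy as np

def psi_delta(lam, delta, n_sim=200_000, seed=0):
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((n_sim, len(lam)))
    return np.mean((z**2 / np.asarray(lam)).sum(axis=1) <= delta**2)

lam = np.array([3.0, 0.8, 0.2])        # diverse spectrum
lam_bar = np.full(3, lam.mean())       # majorized by lam (Definition 3.1)
for delta in (0.5, 1.0, 2.0):
    assert psi_delta(lam_bar, delta) >= psi_delta(lam, delta)
```

The small eigenvalue 0.2 inflates $Z_3^2/\lambda_3$ and drags the probability down, which is the diversity penalty that Schur-concavity formalizes.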
The following corollary is a direct consequence of Theorem 3.3.

Corollary 3.3. The inequality $\psi_\delta(\lambda) \le \psi_\delta(\tilde\lambda)$ holds for all $\lambda \in \mathbb{R}_+^k$ and all $\delta > 0$, where $\tilde\lambda = (\tilde\lambda, \tilde\lambda, \ldots, \tilde\lambda)'$ and $\tilde\lambda = \prod_{i=1}^{k} \lambda_i^{1/k}$.

3.3 Concavity

The function $\psi_\delta$ is concave on $\mathbb{R}_+^k$ if

$$\psi_\delta[(1 - \alpha)\lambda_1 + \alpha\lambda_2] \ge (1 - \alpha)\psi_\delta(\lambda_1) + \alpha\psi_\delta(\lambda_2)$$

for all $\alpha \in (0, 1)$ and all $\lambda_1, \lambda_2 \in \mathbb{R}_+^k$. Concavity is often regarded as a compelling property of an optimality criterion (cf. Pukelsheim 1993, p. 115). In this section we show that the DS($\varepsilon$)-optimality criterion is not, in general, concave.

Theorem 3.4. The function $\psi_\delta(\lambda)$ is concave on $\mathbb{R}_+^k$ for every fixed $\delta > 0$ if and only if $k \le 2$.

Proof. Let us first consider the special case $\lambda_1 = \lambda_2 = \cdots = \lambda_k$ and let $c$ denote the joint value of the $\lambda_i$'s. Then $\psi_\delta(c) = P(\chi^2_k \le c\delta^2)$ is a concave function of $c > 0$ if and only if for a given $\delta > 0$ the second derivative satisfies $\psi_\delta''(c) \le 0$ for all $c > 0$ (see Marshall and Olkin 1979, 16.B.3.c, p. 448). It can be shown by direct calculation that $\psi_\delta''(c) \le 0$ if $c \ge (k - 2)/\delta^2$ and $\psi_\delta''(c) > 0$ if $c < (k - 2)/\delta^2$ (cf. also Lemma 5.1). Consequently, for $k \ge 3$ the function $\psi_\delta$ is not a concave function of $\lambda$ everywhere on $\mathbb{R}_+^k$. However, $\psi_\delta$ is concave on $\mathbb{R}_+$, i.e. when $k = 1$. Also we see that $\psi_\delta$ is concave on the line $\lambda_1 = \lambda_2$ in $\mathbb{R}_+^2$. Next we show that $\psi_\delta$ is concave on the whole of $\mathbb{R}_+^2$.

Let us now assume that $k = 2$. We show that

$$\psi_\delta(\lambda) = \frac{1}{2\pi} \iint_{z_1^2/\lambda_1 + z_2^2/\lambda_2 \le \delta^2} e^{-\frac{1}{2}(z_1^2 + z_2^2)}\, dz_1\, dz_2$$

is a concave function of $\lambda = (\lambda_1, \lambda_2)'$ on $\mathbb{R}_+^2$. Using polar coordinates we obtain the representation

$$\psi_\delta(\lambda) = \frac{1}{2\pi} \int_0^{2\pi} d\varphi \int_0^{\delta\sqrt{g(\lambda_1, \lambda_2, \varphi)}} r e^{-\frac{1}{2}r^2}\, dr = \frac{1}{2\pi} \int_0^{2\pi} \Big[1 - e^{-\frac{1}{2}\delta^2 g(\lambda_1, \lambda_2, \varphi)}\Big]\, d\varphi, \qquad (3.3)$$
where $g(\lambda_1, \lambda_2, \varphi) = \Big(\dfrac{\cos^2\varphi}{\lambda_1} + \dfrac{\sin^2\varphi}{\lambda_2}\Big)^{-1}$. It can be shown by differentiation that the Hessian matrix $H(\lambda, \varphi) = \dfrac{\partial^2 g}{\partial\lambda\,\partial\lambda'}$ of $g(\lambda, \varphi)$ is nonpositive definite on $\mathbb{R}_+^2$ for all $\varphi \in [0, 2\pi]$. Hence $g(\lambda, \varphi)$ is a concave function of $\lambda = (\lambda_1, \lambda_2)'$ on $\mathbb{R}_+^2$ for all $\varphi \in [0, 2\pi]$ (Marshall and Olkin 1979, p. 448). The second-order derivative of the function $f_\delta(\lambda, \varphi) = 1 - e^{-\frac{1}{2}\delta^2 g(\lambda, \varphi)}$ with respect to $\lambda$ is

$$\frac{\partial^2 f_\delta}{\partial\lambda\,\partial\lambda'} = \frac{\delta^2}{2}\Big[\frac{\partial^2 g}{\partial\lambda\,\partial\lambda'} - \frac{\delta^2}{2}\,\frac{\partial g}{\partial\lambda}\,\frac{\partial g}{\partial\lambda'}\Big] e^{-\frac{1}{2}\delta^2 g}. \qquad (3.4)$$

Since it is nonpositive definite, $f_\delta(\lambda, \varphi)$ is a concave function of $\lambda$ on $\mathbb{R}_+^2$ for all $\varphi \in [0, 2\pi]$. Therefore $\psi_\delta(\lambda) = \frac{1}{2\pi} \int_0^{2\pi} f_\delta(\lambda, \varphi)\, d\varphi$ is also concave on $\mathbb{R}_+^2$. □

Note that for any given $\varepsilon > 0$, $\delta^2 = n\varepsilon^2/\sigma^2 \to \infty$ as $n \to \infty$. As shown in the proof of Theorem 3.4, $\psi_\delta(c)$ is a concave function of $c = \lambda_1 = \cdots = \lambda_k$ on $((k - 2)/\delta^2, \infty)$. Clearly, for any given $\varepsilon > 0$ and $k > 2$, $((k - 2)/\delta^2, \infty) \to \mathbb{R}_+$ as $n \to \infty$. It can be shown in general that for any $\varepsilon > 0$ and $k \in \mathbb{N}$, $\psi_\delta(\lambda)$ is concave on the convex set $A_\delta = \{\lambda \in \mathbb{R}_+^k : \lambda_i \in ((k - 2)/\delta^2, \infty),\ 1 \le i \le k\}$. This set approaches $\mathbb{R}_+^k$ as $n \to \infty$.

4 DS-optimality in polynomial fit models

We now posit a polynomial fit model of degree $d \ge 1$, where

$$Y_{ij} = \beta_0 + \beta_1 t_i + \cdots + \beta_d t_i^d + \varepsilon_{ij}, \qquad (4.1)$$

$E[\varepsilon_{ij}] = 0$ and $V[\varepsilon_{ij}] = \sigma^2$ for $i = 1, 2, \ldots, l$ and $j = 1, 2, \ldots, n_i$. The responses $Y_{ij}$ are uncorrelated and the experimental conditions $t_1, t_2, \ldots, t_l$ are assumed to lie in the interval $T = [-1, 1]$, which is called the experimental domain. The corresponding regression range $\chi = \{(1, t, \ldots, t^d)' : t \in T\}$ is a one-dimensional curve embedded in $\mathbb{R}^{d+1}$. Any collection $\tau = \{t_1, t_2, \ldots, t_l;\ p_1, p_2, \ldots, p_l\}$ of $l \ge 1$ distinct points $t_i \in T$ and positive numbers $p_i$, $i = 1, 2, \ldots, l$, such that $\sum_{i=1}^{l} p_i = 1$, induces a continuous design $\xi$ on the regression range $\chi$ (cf. Pukelsheim 1993, p. 32). In what follows we will also call $\tau$ a design and denote by $\mathcal{T}$ the set of all such designs.
4.1 Symmetric polynomial designs

First we show that the function $\psi_\varepsilon$ is invariant with respect to the reflection operation. Let $\tau \in \mathcal{T}$ be a design for the LSE of $\beta = (\beta_0, \beta_1, \ldots, \beta_d)'$ on $T = [-1, 1]$ in the polynomial fit model (4.1). The reflected design $\tau^R$ is given by

$$\tau^R = \{-t_1, -t_2, \ldots, -t_l;\ p_1, p_2, \ldots, p_l\}.$$

The designs $\tau$ and $\tau^R$ have the same even moments, while the odd moments of $\tau^R$ have a reversed sign. If $M_d(\tau)$ denotes the $(d + 1) \times (d + 1)$ moment matrix of $\tau$, then

$$M_d(\tau^R) = Q M_d(\tau) Q,$$

where $Q = \operatorname{diag}(1, -1, 1, -1, \ldots, \pm 1)$ is a diagonal matrix with diagonal elements $1, -1, 1, -1, \ldots, \pm 1$. Let $M_d = P\Lambda P'$ be the spectral decomposition of $M_d$, where the diagonal elements of the diagonal matrix $\Lambda$ are the eigenvalues of $M_d$ and $P$ is an orthogonal matrix. It is easy to see that $Q M_d Q = (QP)\Lambda(QP)'$ is the spectral decomposition of $Q M_d Q$. Consequently, $M_d$ and $Q M_d Q$ have the same eigenvalues. Since the value of $\psi_\varepsilon(M_d)$ depends on $M_d$ only through its eigenvalues, $\psi_\varepsilon$ is invariant under the action of $Q$. Thus $\psi_\varepsilon(M_d) = \psi_\varepsilon(Q M_d Q)$ for all moment matrices $M_d$ and for all $\varepsilon > 0$.

Theorem 4.1. Let $\tau$ be a design and $\tau^R$ the corresponding reflected design in a $d$th degree polynomial fit model with experimental domain $T = [-1, 1]$. Then

$$\psi_\varepsilon[(1 - \alpha)M_d(\tau) + \alpha M_d(\tau^R)] \ge \psi_\varepsilon[M_d(\tau)]$$

for all $\alpha \in [0, 1]$ and all $\varepsilon > 0$.

Proof. Let $\lambda$ denote the vector of eigenvalues of $M_d(\tau)$. Invariance under the action of $Q$ implies that the moment matrix $M_d(\tau^R)$ has the same eigenvalues as $M_d(\tau)$, and the desired result follows immediately from Corollary 3.2. □

The symmetrized design $\bar\tau = \frac{1}{2}(\tau + \tau^R)$ has the moment matrix

$$M_d(\bar\tau) = \tfrac{1}{2}\big[M_d(\tau) + Q M_d(\tau) Q\big],$$

in which all odd moments are zero. Hence the averaging operation simplifies the moment matrices by letting all odd moments vanish. By Theorem 4.1 the symmetrized design $\bar\tau$ is at least as good as $\tau$ with respect to the criterion $\psi_\varepsilon$. Symmetrization thus increases the value of $\psi_\varepsilon$, or at least maintains the same value.
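The reflection identity $M_d(\tau^R) = Q M_d(\tau) Q$ and the vanishing of odd moments under symmetrization are easy to verify numerically. The sketch below uses an invented degree-2 design; `moment_matrix` is a hypothetical helper, not notation from the paper.

```python
# Sketch: reflection invariance for a degree-d polynomial fit model on [-1, 1].
# M_d(tau^R) = Q M_d(tau) Q with Q = diag(1, -1, 1, ...), so the spectra agree;
# symmetrizing the design zeroes the odd moments.
import numpy as np

d = 2
t = np.array([-0.3, 0.6, 1.0])          # support points (illustrative)
p = np.array([0.2, 0.5, 0.3])           # weights summing to 1

def moment_matrix(t, p, d):
    X = np.vander(t, d + 1, increasing=True)   # rows (1, t_i, ..., t_i^d)
    return X.T @ np.diag(p) @ X

M  = moment_matrix(t, p, d)
MR = moment_matrix(-t, p, d)            # moment matrix of the reflected design
Q  = np.diag([(-1)**j for j in range(d + 1)])
assert np.allclose(MR, Q @ M @ Q)
assert np.allclose(np.linalg.eigvalsh(M), np.linalg.eigvalsh(MR))

M_sym = 0.5 * (M + MR)                  # symmetrized design
assert abs(M_sym[0, 1]) < 1e-12         # first (odd) moment vanishes
```

Equal spectra mean equal $\psi_\varepsilon$ values, which is the invariance used in the proof of Theorem 4.1.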
Pukelsheim's (1993) Claim 10.7 states that $\tau$ is admissible if and only if $\tau$ has at most $d - 1$ support points in the open interval $(-1, 1)$. Thus the $(d + 1)$-point designs $\tau = \{-1, t_2, \ldots, t_d, 1;\ p_1, p_2, \ldots, p_d, p_{d+1}\}$ with $t_2, t_3, \ldots, t_d \in (-1, 1)$ and $\sum_{i=1}^{d+1} p_i = 1$ are admissible. However, as noted above, the corresponding symmetrized designs $\bar\tau$ are at least as good as $\tau$.
4.2 First-degree polynomial fit models

Let us look at an $m$-way first-degree polynomial fit model

$$Y_{ij} = \beta_0 + \beta_1 t_{i1} + \cdots + \beta_m t_{im} + \varepsilon_{ij} \qquad (4.2)$$

with $m$ regression variables and $l$ experimental conditions $t_i = (t_{i1}, t_{i2}, \ldots, t_{im})'$, $i = 1, 2, \ldots, l$. The experiment $\tau_n$ for sample size $n$ has $n_i$ replications of level $t_i = (t_{i1}, t_{i2}, \ldots, t_{im})'$. We assume now that the experimental domain is an $m$-dimensional Euclidean ball of radius $\sqrt{m}$, that is, $T_{\sqrt{m}} = \{t \in \mathbb{R}^m : \|t\| \le \sqrt{m}\}$. If the vectors $t_1, t_2, \ldots, t_{m+1}$ fulfill the conditions

$$1 + t_i' t_i = m + 1, \qquad 1 + t_i' t_j = 0 \qquad (4.3)$$

for all $i \ne j \le m + 1$, then the vectors span a convex body called a regular simplex. The vertices $t_i$ belong to the boundary sphere of the ball $T_{\sqrt{m}}$. For $m = 2$ the support points $t_1, t_2, t_3$ span an equilateral triangle on the sphere of $T_{\sqrt{2}}$. A design which places uniform weight $1/(m + 1)$ on the vertices $t_1, t_2, \ldots, t_{m+1}$ of a regular simplex in $\mathbb{R}^m$ is called a regular simplex design (cf. Pukelsheim 1993, p. 391). For all $m \ge 1$ it is always possible to choose vectors $t_1, t_2, \ldots, t_{m+1}$ such that they fulfill the conditions (4.3). Putting equal weights $1/(m + 1)$ on them yields a regular simplex design $\tau$ with $M_1(\tau) = I_{m+1}$. Thus a regular simplex design always exists. Note that any rotation of a regular simplex design is also a regular simplex design. The smallest possible support size of a feasible design for the LSE of $\beta = (\beta_0, \beta_1, \ldots, \beta_m)'$ in an $m$-way first-degree model is $l = m + 1$, because $\beta$ has $m + 1$ components.

Theorem 4.2. Let $\mathcal{T}_{m+1}$ be the set of designs $\tau$ with support size $l = m + 1$ in the $m$-way first-degree model (4.2) on the experimental domain $T_{\sqrt{m}}$. Then a design $\tau \in \mathcal{T}_{m+1}$ is DS-optimal if and only if it is a regular simplex design.

Proof. Let $\tau \in \mathcal{T}_{m+1}$ be a design for the LSE of $\beta$ with support size $l = m + 1$. Hence $\tau = \{t_1, t_2, \ldots, t_{m+1};\ p_1, p_2, \ldots, p_{m+1}\}$, where $t_1, t_2, \ldots,$
$t_{m+1}$ are distinct vectors from $T_{\sqrt{m}}$ with positive weights $p_1, p_2, \ldots, p_{m+1}$ such that $\sum_{i=1}^{m+1} p_i = 1$. For such a design

$$M_1(\tau) = X' D X, \qquad D = \operatorname{diag}(p_1, p_2, \ldots, p_{m+1}),$$

and the model matrix $X$ is square. Since $|X'DX| = |DXX'|$, it follows from Hadamard's inequality (Horn and Johnson 1987, p. 477) that

$$|M_1(\tau)| = \lambda_1 \lambda_2 \cdots \lambda_{m+1} \le \prod_{i=1}^{m+1} p_i (1 + t_i' t_i), \qquad (4.4)$$

where $\lambda_1, \lambda_2, \ldots, \lambda_{m+1}$ are the eigenvalues of $M_1(\tau)$ and $p_i (1 + t_i' t_i)$, $i = 1, 2, \ldots, m + 1$, are the diagonal elements of $DXX'$. Equality holds if and only if the matrix $XX'$ is diagonal, that is, if $1 + t_i' t_j = 0$ for all $i \ne j \le m + 1$.
Now the arithmetic-geometric mean inequality

$$\prod_{i=1}^{m+1} p_i \le \Big(\frac{1}{m + 1}\Big)^{m+1}$$

and the conditions $t_i' t_i \le m$, $i = 1, 2, \ldots, m + 1$, yield the inequality

$$\prod_{i=1}^{m+1} p_i (1 + t_i' t_i) \le 1. \qquad (4.5)$$

Equality holds if and only if $p_1 = p_2 = \cdots = p_{m+1} = \frac{1}{m + 1}$ and all $t_i$ lie on the boundary sphere of $T_{\sqrt{m}}$. By Corollary 3.3

$$\psi_\varepsilon[M_1(\tau)] \le P(\chi^2_{m+1} \le \delta^2 \tilde\lambda) \qquad (4.6)$$

for all $\varepsilon > 0$, where $\delta = \sqrt{n}\,\varepsilon/\sigma$ and $\tilde\lambda = (\lambda_1 \lambda_2 \cdots \lambda_{m+1})^{1/(m+1)}$. The right hand side of (4.6) is an increasing function of the bound $\delta^2 \tilde\lambda$. Therefore (4.4), (4.5) and (4.6) yield the inequalities

$$\psi_\varepsilon[M_1(\tau)] \le P(\chi^2_{m+1} \le \delta^2 \tilde\lambda) \le P(\chi^2_{m+1} \le \delta^2) \qquad (4.7)$$

for all $\varepsilon > 0$.

Let $\tau \in \mathcal{T}_{m+1}$ be DS-optimal. It therefore maximizes $\psi_\varepsilon[M_1(\tau)]$ for all $\varepsilon > 0$. If $\psi_\varepsilon[M_1(\tau)]$ attains its maximum $P(\chi^2_{m+1} \le \delta^2)$ for all $\varepsilon > 0$, then it follows from (4.4), (4.5) and (4.7) that the support points $t_1, t_2, \ldots, t_{m+1}$ fulfill the conditions (4.3) and $p_1 = p_2 = \cdots = p_{m+1} = \frac{1}{m + 1}$, i.e. $\tau$ is a regular simplex design. On the other hand, for a regular simplex design $\tau \in \mathcal{T}_{m+1}$ we have $M_1(\tau) = I_{m+1}$. But then $\psi_\varepsilon[M_1(\tau)] = P(\chi^2_{m+1} \le \delta^2)$ for all $\varepsilon > 0$ and hence $\tau$ is DS-optimal. This completes the proof of the theorem. □

If $m = 1$, then we have the line fit model

$$Y_{ij} = \beta_0 + \beta_1 t_i + \varepsilon_{ij} \qquad (4.8)$$

with experimental domain $T = [-1, 1]$. It follows from the theorem of De la Garza (1954) and Theorem 4.2 that the design $\tau_{1/2} = \{-1, 1; \frac{1}{2}\}$, which assigns weight $\frac{1}{2}$ to the points $-1$ and $1$, is the unique DS-optimal design for the LSE of $\beta = (\beta_0, \beta_1)'$ in (4.8). The same result was obtained with the help of a different technique in Liski, Luoma, Mandal and Sinha (1998).

An $(m + 1) \times (m + 1)$ matrix $X$ with entries $1$ and $-1$ is called a Hadamard matrix if $X'X = (m + 1) I_{m+1}$. Thus there exists a two-level regular simplex design (and therefore a two-level DS-optimal design) with levels $\pm 1$ if and only if there is a Hadamard matrix of order $m + 1$.
If $m = 2$, for example, there is no Hadamard matrix of order 3, and hence there exists no two-level regular simplex design with levels $\pm 1$ in a 2-way first-order polynomial fit model.
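For $m = 3$ a Hadamard matrix of order 4 does exist, so the construction above can be carried out explicitly. The sketch below (a numeric illustration, not from the paper) builds a two-level regular simplex design and verifies the simplex conditions (4.3) and the identity moment matrix used in Theorem 4.2.

```python
# Sketch: a regular simplex design for m = 3 via a Hadamard matrix of order
# m + 1 = 4.  Rows of H (first column all ones) give support points
# t_i in {-1, +1}^3 on the sphere of radius sqrt(3); uniform weights 1/4
# yield the moment matrix M_1(tau) = I_4.
import numpy as np

H = np.array([[1,  1,  1,  1],
              [1,  1, -1, -1],
              [1, -1,  1, -1],
              [1, -1, -1,  1]])
assert np.array_equal(H.T @ H, 4 * np.eye(4))   # Hadamard: H'H = (m+1) I

T = H[:, 1:]                       # support points t_1, ..., t_4 in R^3
D = np.diag(np.full(4, 0.25))      # uniform weights p_i = 1/(m+1)
M = H.T @ D @ H                    # moment matrix of the design
assert np.allclose(M, np.eye(4))

# Simplex conditions (4.3): 1 + t_i't_i = m + 1 and 1 + t_i't_j = 0, i != j
G = 1 + T @ T.T
assert np.allclose(G, 4 * np.eye(4))
```

With $M_1(\tau) = I_{m+1}$ the bound $\psi_\varepsilon[M_1(\tau)] = P(\chi^2_{m+1} \le \delta^2)$ in (4.7) is attained for every $\varepsilon$, which is what makes the design DS-optimal rather than optimal only for a single $\varepsilon$.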
Corollary 4.1. Let $\mathcal{T}_l$ be the set of designs $\tau$ with support size $l \ge m + 1$ in the $m$-way first-degree model (4.2) on the experimental domain $T_{\sqrt{m}}$. Then a design $\tau \in \mathcal{T}_l$ is DS-optimal if $M_1(\tau) = I_{m+1}$.

Proof. Let $\tau = \{t_1, t_2, \ldots, t_l;\ p_1, p_2, \ldots, p_l\} \in \mathcal{T}_l$ be a design for the LSE of $\beta$. Since $\operatorname{tr} M_1(\tau) = \operatorname{tr}(X'DX)$, we have by the properties of the trace that

$$\operatorname{tr}(X'DX) = \operatorname{tr}(DXX') = \sum_{i=1}^{l} p_i (1 + t_i' t_i) \le m + 1.$$

The upper bound is attained if and only if $t_i' t_i = m$ for all $i = 1, 2, \ldots, l$. Clearly, under the constraint $\operatorname{tr} M_1(\tau) = \lambda_1 + \lambda_2 + \cdots + \lambda_{m+1} \le m + 1$, the determinant $|M_1(\tau)| = \lambda_1 \lambda_2 \cdots \lambda_{m+1}$ attains its maximum if and only if $\lambda_1 = \lambda_2 = \cdots = \lambda_{m+1} = 1$. Consequently, $|M_1(\tau)|$ has its maximum for a design $\tau$ if $M_1(\tau) = I_{m+1}$. By the same argument as in the last paragraph of the proof of Theorem 4.2, we can conclude that $\tau$ is DS-optimal if $M_1(\tau) = I_{m+1}$. □

As shown in Theorem 4.2, $M_1(\tau) = I_{m+1}$ holds for a regular simplex design $\tau$. Hence a design $\tau$ satisfying the condition $M_1(\tau) = I_{m+1}$ exists for the minimal feasible support size $l = m + 1$. Existence of a DS-optimal design in general, for any given pair of positive integers $m$ and $l \ge m + 1$, seems to be an unsolved problem. Nevertheless, a DS-optimal design can be found for certain values of $m$ and $l \ge m + 1$. For example, the complete factorial design $2^m$ is a DS-optimal design with $l = 2^m$. It is a two-level design which assigns uniform weight $1/2^m$ to each of the $l = 2^m$ vertices of the $m$-dimensional cube $[-1, 1]^m \subset T_{\sqrt{m}}$. The moment matrix of the complete factorial design $2^m$ is $I_{m+1}$, and consequently it is a DS-optimal design for the LSE of $\beta$. But if we have an $m$-way first-degree model without a constant term,

$$Y_{ij} = \beta_1 t_{i1} + \cdots + \beta_m t_{im} + \varepsilon_{ij}, \qquad (4.9)$$

then a DS-optimal design on $T_{\sqrt{m}}$ always exists. This result follows from Chow and Lii (1993), where D-optimal designs were constructed for the LSE of $\beta = (\beta_1, \beta_2, \ldots, \beta_m)'$ in the model (4.9).
It can be shown, using Corollary 4.1, that these D-optimal designs are also DS-optimal.

5 D-, E- and DS(ε)-optimality

To open this section we consider a class of designs for the mean parameters of a simple second-degree polynomial fit model. It is an example of a set of designs in which no DS-optimal design exists. The main result of this section, Theorem 5.1, shows that the classical D- and E-criteria follow from the DS(ε)-criterion as ε approaches 0 and ∞, respectively.

5.1 Quadratic regression without the first-degree term

Let us now consider the parabola fit model

$$E[Y_{ij}] = \beta_0 + \beta_1 t_i^2 \qquad (5.1)$$
over the experimental domain $T = [-1, 1]$. If we denote $z_i = t_i^2$, then the experimental domain of $z$ is $T_z = [0, 1]$. We may thus seek a DS-optimal design for the LSE of $\beta = (\beta_0, \beta_1)'$ over $T_z = [0, 1]$. Let $\tau$ be a design with $l \ge 2$ distinct experimental levels $z_1, z_2, \ldots, z_l$ on $T_z = [0, 1]$. Then, by the theorem of De la Garza (1954), it is always possible to find a two-point design $\tau_p = \{a, b; p\}$ such that

$$M_1(\tau) = M_1(\tau_p),$$

where $p$ and $q = 1 - p$ are the weights at the points $b$ and $a$, respectively, and $a, b \in [0, 1]$ (cf. Pukelsheim 1993, Claim 10.7). Thus the moment matrix of $\tau_p$ is

$$M_1(\tau_p) = q\begin{pmatrix} 1 & a \\ a & a^2 \end{pmatrix} + p\begin{pmatrix} 1 & b \\ b & b^2 \end{pmatrix} = \begin{pmatrix} 1 & qa + pb \\ qa + pb & qa^2 + pb^2 \end{pmatrix}.$$

However, it is likewise always possible to choose a competing design $\tau_{p^*} = \{0, 1; p^*\}$ with $p^* = qa + pb$, which is at least as good as $\tau_p$ in the Loewner ordering sense. This claim can be checked straightforwardly by noting that

$$M_1(\tau_{p^*}) - M_1(\tau_p) = \begin{pmatrix} 0 & 0 \\ 0 & q(a - a^2) + p(b - b^2) \end{pmatrix}$$

is nonnegative definite, since $q(a - a^2) + p(b - b^2) \ge 0$ for all $a, b \in [0, 1]$. As the DS-optimality criterion is isotonic relative to the Loewner ordering, we have the inequality

$$\psi_\varepsilon[M_1(\tau_{p^*})] \ge \psi_\varepsilon[M_1(\tau_p)] \quad \text{for all } \varepsilon > 0.$$

Therefore $\tau_{p^*}$ is at least as good as $\tau_p$ with respect to the DS-optimality criterion. From this it follows that we may restrict ourselves to the class of two-point designs $\tilde\tau_p = \{0, 1; p\}$, where $p$ is the proportion of replications at 1. The moment matrix is of the form

$$M_1(\tilde\tau_p) = \begin{pmatrix} 1 & p \\ p & p \end{pmatrix}. \qquad (5.2)$$

The eigenvalues of $M_1(\tilde\tau_p)$ are

$$\lambda_{1,2} = \frac{p + 1}{2} \pm \sqrt{\Big(\frac{p + 1}{2}\Big)^2 - p(1 - p)}.$$

The greatest eigenvalue $\lambda_1(p)$ is a monotonically increasing function of $p$, while $\lambda_2(p)$ attains its maximum at $p = \frac{2}{5}$. The product of the eigenvalues is $\lambda_1(p)\lambda_2(p) = p(1 - p) = |M_1(\tilde\tau_p)|$, which has its maximum at $p = \frac{1}{2}$. Hence $\tilde\tau_{2/5} = \{0, 1; \frac{2}{5}\}$ is the E-optimal design and $\tilde\tau_{1/2} = \{0, 1; \frac{1}{2}\}$ is the D-optimal design for the LSE of $\beta = (\beta_0, \beta_1)'$ over $T_z = [0, 1]$. Let $Z = \frac{\sqrt{n}}{\sigma} \Lambda^{1/2} P' (\hat\beta - \beta)$ be defined as in Section 2.
Since the eigenvalues $\lambda_1(p)$ and $\lambda_2(p)$ of the matrix (5.2) are functions of the weight parameter $p$, the DS($\varepsilon$)-optimality criterion $\psi_\varepsilon(M_1)$ depends on $M_1$ solely through $p$. Hence we may write

$$\psi_\delta(\lambda) = \psi_\delta(p) = P\Big(\frac{Z_1^2}{\lambda_1(p)} + \frac{Z_2^2}{\lambda_2(p)} \le \delta^2\Big).$$

Note that $Y \sim N_n(X\beta, \sigma^2 I_n)$, and consequently $Z = (Z_1, Z_2)' \sim N_2(0, I_2)$. Since $\psi_\delta$ is isotonic relative to the Loewner ordering and $\lambda_1(p)$ and $\lambda_2(p)$ are increasing on $(0, \frac{2}{5}]$, we may conclude that $\psi_\delta(\frac{2}{5}) \ge \psi_\delta(p)$ for all $p \in (0, \frac{2}{5}]$ and all $\delta > 0$. Consequently, if $\tilde\tau_p$ is DS-optimal, then $p \in [\frac{2}{5}, 1)$.

[Fig. 1. The graph of $\psi_\delta(p)$ for (a) $\delta = 1$, (b) $\delta = 2$, (c) $\delta = 4$.]

We now inquire whether there exists a DS-optimal design $\tilde\tau_p = \{0, 1; p\}$, $p \in [0, 1]$. If we define $V_i = Z_i/(\delta\sqrt{\lambda_i})$, then $V = (V_1, V_2)'$ follows a normal distribution $N_2(0, \delta^{-2}\Lambda^{-1})$. The criterion $\psi_\delta$ can be written as

$$\psi_\delta(p) = P(\|V\|^2 \le 1) = \frac{\delta^2 \sqrt{\lambda_1(p)\lambda_2(p)}}{2\pi} \iint_{\|v\| \le 1} \exp\Big(-\frac{\delta^2}{2}\, v' \Lambda v\Big)\, dv,$$

where $\|v\|^2 = v_1^2 + v_2^2$, $v' \Lambda v = \lambda_1(p) v_1^2 + \lambda_2(p) v_2^2$ and $dv = dv_1\, dv_2$. The function $\psi_\delta(p)$ is positive and continuous on $(0, 1)$ for all $\delta > 0$, and $\psi_\delta(0) = \psi_\delta(1) = 0$. Hence $\psi_\delta(p)$ has a maximum on the interval $(0, 1)$ at, say, $p^*$. However, the maximum point $p^*$ depends on $\delta$ (see Fig. 1). Consequently, there exists no DS-optimal design for the LSE of $\beta$ over $T_z = [0, 1]$ in the model (5.1).

5.2 Characterization of D- and E-optimal designs using DS(ε)-optimality

Let $\lambda = (\lambda_1, \lambda_2, \ldots, \lambda_k)'$ and $\gamma = (\gamma_1, \gamma_2, \ldots, \gamma_k)'$ be the vectors of eigenvalues of the positive definite matrices $M(\xi_1)$ and $M(\xi_2)$, respectively. The entries of
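The dependence of the best weight on $\delta$ can be seen numerically. The sketch below (an illustration, not from the paper; `eigs` and `psi` are invented names) uses the eigenvalue formula for (5.2) and a Monte Carlo estimate of $\psi_\delta(p)$: for small $\delta$ the determinant $p(1-p)$ dominates and weights near $\frac{1}{2}$ do well, while for large $\delta$ the smallest eigenvalue dominates and weights near $\frac{2}{5}$ do well.

```python
# Sketch: psi_delta(p) for the two-point design {0, 1; p} in model (5.1).
import numpy as np

def eigs(p):
    """Eigenvalues of M_1 in (5.2): (p+1)/2 +/- sqrt(((p+1)/2)^2 - p(1-p))."""
    t = (1 + p) / 2
    r = np.sqrt(t**2 - p * (1 - p))
    return t + r, t - r

def psi(p, delta, n_sim=400_000, seed=1):
    l1, l2 = eigs(p)
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((n_sim, 2))
    return np.mean(z[:, 0]**2 / l1 + z[:, 1]**2 / l2 <= delta**2)

# Small delta: D-type behaviour (favours p near 1/2 over extreme weights).
print(psi(0.5, 0.5), psi(0.9, 0.5))
# Large delta: E-type behaviour (favours p near 2/5 over extreme weights).
print(psi(0.4, 6.0), psi(0.9, 6.0))
```

That the ranking of weights flips between the two regimes is exactly the obstruction to a single DS-optimal design, and it previews the limit results of Theorem 5.1.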
$\lambda$ and $\gamma$ are arranged in decreasing order. Since $M(\xi_1)$ and $M(\xi_2)$ are positive definite, $\lambda_k > 0$ and $\gamma_k > 0$. A design $\xi_1$ is at least as good as a design $\xi_2$ with respect to the DS($\varepsilon$)-criterion if

$$\psi_\varepsilon[M(\xi_1)] \ge \psi_\varepsilon[M(\xi_2)], \quad \text{or equivalently,} \quad \psi_\delta(\lambda) \ge \psi_\delta(\gamma)$$

for $\delta = \sqrt{n}\,\varepsilon/\sigma > 0$. Denote $V_i = Z_i/(\delta\sqrt{\lambda_i})$, $i = 1, 2, \ldots, k$, and $V = (V_1, V_2, \ldots, V_k)'$. Then $V$ follows a $k$-variate normal distribution $N_k(0, \delta^{-2}\Lambda^{-1})$. We may therefore write $\psi_\delta(\lambda)$ using (2.4) as follows:

$$\psi_\delta(\lambda) = P(\|V\|^2 \le 1) = \delta^k (2\pi)^{-k/2} |\Lambda|^{1/2} \int \cdots \int_{\|v\| \le 1} \exp\Big(-\frac{\delta^2}{2}\, v' \Lambda v\Big)\, dv, \qquad (5.3)$$

where $\|v\|^2 = v_1^2 + v_2^2 + \cdots + v_k^2$, $v = (v_1, v_2, \ldots, v_k)'$ and $dv = dv_1\, dv_2 \cdots dv_k$. For any fixed $\lambda$ with $\lambda_k > 0$ we may consider $\psi_\delta(\lambda)$ as a function of $\delta$. Theorem 5.1 utilizes the following lemma.

Lemma 5.1. Let $\chi^2_1$ and $\chi^2_k$ denote chi-squared random variables with $1$ and $k$ $(k > 1)$ degrees of freedom, respectively. Let $a$ and $b$ be positive real numbers and let $d_0 = d_0(a, b, k)$ be sufficiently large. Then the following statements hold:

(i) If $a \le b$, then $P(\chi^2_k \le ad) < P(\chi^2_1 \le bd)$ for all $d > 0$.
(ii) If $a > b$, then $P(\chi^2_k \le ad) > P(\chi^2_1 \le bd)$ for all $d > d_0$.

Proof. It is clear that

$$P(\chi^2_k > c) > P(\chi^2_1 > c) \quad \text{for all } c > 0,$$

since $\chi^2_k = \sum_{i=1}^{k} Z_i^2$ and $\chi^2_1 = Z_1^2$. Therefore, $a \le b$ implies $P(\chi^2_k \le ad) < P(\chi^2_1 \le bd)$ for all $d > 0$. Thus we have proved part (i). To prove part (ii) we consider the quotient

$$q(d; a, b) = \frac{H_k(d, a)}{H_1(d, b)},$$

where the functions

$$H_k(d, a) = P(\chi^2_k > ad) = c_k \int_{ad}^{\infty} x^{k/2 - 1} e^{-x/2}\, dx > 0$$

and

$$H_1(d, b) = P(\chi^2_1 > bd) = c_1 \int_{bd}^{\infty} x^{-1/2} e^{-x/2}\, dx > 0$$
as functions of $d$ are defined on $(0, \infty)$, and $c_k = [2^{k/2}\Gamma(k/2)]^{-1}$ and $c_1 = 1/\sqrt{2\pi}$ are positive constants. We prove the lemma by showing that $q(d; a, b) \to 0$ as $d \to \infty$ for $a > b$.

The limit of the quotient $q(d; a, b)$ is indeterminate of the form $0/0$ as $d \to \infty$. The functions $H_k$ and $H_1$ are differentiable and their derivatives with respect to $d$ are

$$H_k'(d; a) = -c_k\, (ad)^{k/2 - 1} e^{-ad/2}\, a$$

and

$$H_1'(d; b) = -c_1\, (bd)^{-1/2} e^{-bd/2}\, b,$$

respectively. Therefore we consider the quotient of derivatives: $H_k'(d; a)/H_1'(d; b) = c\, d^{(k-1)/2} e^{(b-a)d/2}$, where $c$ is a constant not depending on $d$. Then we have

$$\frac{H_k'(d; a)}{H_1'(d; b)} \to 0 \ \text{ as } d \to \infty \quad \text{if and only if} \quad a > b.$$

Hence, by L'Hôpital's rule,

$$\frac{H_k(d; a)}{H_1(d; b)} \to 0 \ \text{ as } d \to \infty \ \text{ for } a > b. \qquad (5.4)$$

Consequently, by (5.4), for sufficiently large $d$,

$$H_k(d; a) = 1 - P(\chi_k^2 \le ad) < H_1(d; b) = 1 - P(\chi_1^2 \le bd) \quad \text{for } a > b.$$

This completes our proof. $\square$

We are now in a position to characterize the DS($\varepsilon$)-criterion as $\varepsilon$ approaches 0 and $\infty$, respectively. These limiting cases have an interesting relationship with the traditional D- and E-optimality criteria.

Theorem 5.1. Let $\lambda, \gamma \in \mathbb{R}^k$ denote vectors whose components are the eigenvalues of the moment matrices $M(\xi_1)$ and $M(\xi_2)$, respectively, arranged in decreasing order. Then the following statements hold:

(a) If $c_d(\lambda) \ge c_d(\gamma)$ for all sufficiently small $d > 0$, then $|M(\xi_1)| \ge |M(\xi_2)|$; if $|M(\xi_1)| > |M(\xi_2)|$, then $c_d(\lambda) > c_d(\gamma)$ for all sufficiently small $d > 0$.

(b) If $c_d(\lambda) \ge c_d(\gamma)$ for all sufficiently large $d$, then $\lambda_k \ge \gamma_k$; if $\lambda_k > \gamma_k$, then $c_d(\lambda) > c_d(\gamma)$ for all sufficiently large $d$.

Proof. We first prove part (a). In view of (5.3) the inequality $c_d(\lambda) \ge c_d(\gamma)$ can be written as

$$|\Lambda|^{1/2} \int\cdots\int_{\|v\| \le 1} \exp\left(-\frac{d^2}{2}\, v'\Lambda v\right) dv \ \ge\ |\Gamma|^{1/2} \int\cdots\int_{\|v\| \le 1} \exp\left(-\frac{d^2}{2}\, v'\Gamma v\right) dv, \qquad (5.5)$$
where $\Lambda$ and $\Gamma$ are the diagonal matrices of the eigenvalues of $M(\xi_1)$ and $M(\xi_2)$, respectively. Both integrals

$$\int\cdots\int_{\|v\| \le 1} \exp\left(-\frac{d^2}{2}\, v'\Lambda v\right) dv \quad \text{and} \quad \int\cdots\int_{\|v\| \le 1} \exp\left(-\frac{d^2}{2}\, v'\Gamma v\right) dv$$

approach $\pi^{k/2}/\Gamma(k/2 + 1)$ as $d \to 0$. Consequently, if (5.5) holds for all sufficiently small $d > 0$, then $|\Lambda|^{1/2} \ge |\Gamma|^{1/2}$, or equivalently $|M(\xi_1)| = |\Lambda| \ge |\Gamma| = |M(\xi_2)|$. On the other hand, if $|\Lambda| = |M(\xi_1)| > |M(\xi_2)| = |\Gamma|$, then (5.5) holds with a strict inequality for all sufficiently small $d > 0$. This completes the proof of part (a).

The proof of part (b). Since $\gamma_i \ge \gamma_k > 0$ implies $\gamma_i^{-1} \le \gamma_k^{-1}$ for all $i = 1, 2, \ldots, k$, Theorem 3.1 yields the inequality

$$c_d(\gamma) = P\left(\frac{Z_1^2}{\gamma_1} + \frac{Z_2^2}{\gamma_2} + \cdots + \frac{Z_k^2}{\gamma_k} \le d^2\right) \ge P\left(\frac{Z_1^2}{\gamma_k} + \frac{Z_2^2}{\gamma_k} + \cdots + \frac{Z_k^2}{\gamma_k} \le d^2\right) = P(\chi_k^2 \le \gamma_k d^2), \qquad (5.6)$$

where $Z \sim N_k(0, I_k)$ and $Z_1^2 + Z_2^2 + \cdots + Z_k^2 = \chi_k^2$ is a chi-squared random variable with $k$ degrees of freedom. On the other hand, we have the inequality

$$P(\chi_1^2 \le \lambda_k d^2) = P\left(\frac{Z_k^2}{\lambda_k} \le d^2\right) > P\left(\frac{Z_1^2}{\lambda_1} + \frac{Z_2^2}{\lambda_2} + \cdots + \frac{Z_k^2}{\lambda_k} \le d^2\right) = c_d(\lambda), \qquad (5.7)$$

where $\chi_1^2$ is a chi-squared random variable with 1 degree of freedom. If $c_d(\lambda) \ge c_d(\gamma)$ for all sufficiently large $d$, then by (5.6) and (5.7)

$$P(\chi_1^2 \le \lambda_k d^2) > c_d(\lambda) \ge c_d(\gamma) \ge P(\chi_k^2 \le \gamma_k d^2)$$

for all $d > d_0$, where $d_0$ is sufficiently large. Since $P(\chi_1^2 \le \lambda_k d^2) > P(\chi_k^2 \le \gamma_k d^2)$ for all $d > d_0$, it follows from Lemma 5.1 that $\lambda_k \ge \gamma_k$.

Now let us assume that $\lambda_k > \gamma_k$. The same arguments used in the derivation of the inequalities (5.6) and (5.7) also yield the inequalities

$$c_d(\lambda) \ge P(\chi_k^2 \le \lambda_k d^2) \qquad (5.8)$$

and

$$P(\chi_1^2 \le \gamma_k d^2) > c_d(\gamma). \qquad (5.9)$$

If $\lambda_k > \gamma_k$, then Lemma 5.1 yields the inequality
$$P(\chi_k^2 \le \lambda_k d^2) > P(\chi_1^2 \le \gamma_k d^2) \qquad (5.10)$$

for sufficiently large $d$. Hence the inequality $c_d(\lambda) > c_d(\gamma)$ follows from (5.8)–(5.10) for all sufficiently large $d$. This completes the proof of part (b). $\square$

Note that nothing can be said about the relationship between $c_d(\lambda)$ and $c_d(\gamma)$ for all sufficiently small $d > 0$ if $|M(\xi_1)| = |M(\xi_2)|$ in part (a) of Theorem 5.1. Theorem 5.1 also shows that the DS($\varepsilon$)-criterion is equivalent to the D-criterion as $\varepsilon \to 0$, and to the E-criterion as $\varepsilon \to \infty$. These conclusions are due to the fact that both D- and E-optimal designs are unique (cf. Hoel 1958; Pukelsheim and Studden 1993).

Acknowledgements: This paper was completed while the first author was a senior research fellow of the Academy of Finland and the second author a doctoral student at the Tampere Graduate School in Information Science and Engineering. The work was started when the third author was visiting the University of Tampere in 1997, and a substantial part of the paper was prepared during his second visit. He would like to thank his colleagues for their excellent hospitality. Both visits were supported by an Academy of Finland project and partly by the University of Tampere. The authors are grateful to Bikas K. Sinha, who brought the distance optimality criterion to our attention.

References

Anderson TW (1955) The integral of a symmetric unimodal function over a symmetric convex set and some probability inequalities. Proceedings of the American Mathematical Society 6:170–176

Birnbaum ZW (1948) On random variables with comparable peakedness. Annals of Mathematical Statistics 19:76–81

Chow KL, Lii K-S (1993) D-optimal designs with Hadamard matrix. Journal of Mathematical Analysis and Applications 179:76–90

De la Garza A (1954) Spacing of information in polynomial estimation. Annals of Mathematical Statistics 25:123–130

Hoel PG (1958) Efficiency problems in polynomial estimation.
Annals of Mathematical Statistics 29:1134–1145

Horn RA, Johnson CR (1987) Matrix analysis. Cambridge University Press, New York

Liski EP, Luoma A, Mandal NK, Sinha BK (1998) Pitman nearness, distance criterion and optimal regression designs. Calcutta Statistical Association Bulletin 48(191–192):179–194

Marshall AW, Olkin I (1979) Inequalities: Theory of majorization and its applications. Academic Press, New York

Pukelsheim F (1993) Optimal design of experiments. John Wiley & Sons, New York

Pukelsheim F, Studden WJ (1993) E-optimal designs for polynomial regression. Annals of Statistics 21(1):402–415

Shah KR, Sinha BK (1989) Theory of optimal designs. Lecture Notes in Statistics, No. 54. Springer-Verlag

Sherman S (1955) A theorem on convex sets with applications. Annals of Mathematical Statistics 26:763–766

Sinha BK (1970) On the optimality of some designs. Calcutta Statistical Association Bulletin 20:1–20

Stępniak C (1989) Stochastic ordering and Schur-convex functions in comparison of linear experiments. Metrika 36:291–298

Tong YL (1990) The multivariate normal distribution. Springer-Verlag, New York
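The two limiting regimes in Theorem 5.1 are easy to check numerically. The sketch below is not part of the paper: it estimates $c_d(\lambda) = P\bigl(\sum_i Z_i^2/\lambda_i \le d^2\bigr)$ by plain Monte Carlo for two hypothetical eigenvalue vectors of $2 \times 2$ moment matrices, one with the larger determinant and the other with the larger smallest eigenvalue; the vectors, the sample size, and the values of $d$ are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def c_d(eigs, d, n_samples=200_000):
    """Monte Carlo estimate of c_d(lambda) = P(sum_i Z_i^2 / lambda_i <= d^2)."""
    eigs = np.asarray(eigs, dtype=float)
    z = rng.standard_normal((n_samples, eigs.size))
    return float(((z ** 2 / eigs).sum(axis=1) <= d ** 2).mean())

# Hypothetical eigenvalue spectra of two moment matrices:
# lam has the larger determinant (product of eigenvalues),
# gam has the larger smallest eigenvalue.
lam = [4.0, 0.5]   # determinant 2.0, smallest eigenvalue 0.5
gam = [1.5, 1.0]   # determinant 1.5, smallest eigenvalue 1.0

# Small d: the ordering of c_d should follow the determinant (D-criterion).
small = (c_d(lam, 0.2), c_d(gam, 0.2))

# Large d: the ordering of c_d should follow the smallest eigenvalue (E-criterion).
large = (c_d(lam, 3.0), c_d(gam, 3.0))

print("small d:", small, " large d:", large)
```

With a sample size of this order, the small-$d$ comparison should favour the larger-determinant spectrum and the large-$d$ comparison the larger minimum eigenvalue, in line with parts (a) and (b) of the theorem.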
SIMILAR-ON-THE-BOUNDARY TESTS FOR MOMENT INEQUALITIES EXIST, BUT HAVE POOR POWER By Donald W. K. Andrews August 2011 COWLES FOUNDATION DISCUSSION PAPER NO. 1815 COWLES FOUNDATION FOR RESEARCH IN ECONOMICS
More informationChapter 5 Linear Programming (LP)
Chapter 5 Linear Programming (LP) General constrained optimization problem: minimize f(x) subject to x R n is called the constraint set or feasible set. any point x is called a feasible point We consider
More informationDetailed Proof of The PerronFrobenius Theorem
Detailed Proof of The PerronFrobenius Theorem Arseny M Shur Ural Federal University October 30, 2016 1 Introduction This famous theorem has numerous applications, but to apply it you should understand
More informationLecture 2: Review of Prerequisites. Table of contents
Math 348 Fall 217 Lecture 2: Review of Prerequisites Disclaimer. As we have a textbook, this lecture note is for guidance and supplement only. It should not be relied on when preparing for exams. In this
More informationImproved Upper Bounds for the Laplacian Spectral Radius of a Graph
Improved Upper Bounds for the Laplacian Spectral Radius of a Graph Tianfei Wang 1 1 School of Mathematics and Information Science Leshan Normal University, Leshan 614004, P.R. China 1 wangtf818@sina.com
More information