Associated Hypotheses in Linear Models for Unbalanced Data


University of Wisconsin Milwaukee, UWM Digital Commons, Theses and Dissertations, May 2015. Associated Hypotheses in Linear Models for Unbalanced Data. Carlos J. Soto, University of Wisconsin-Milwaukee. Follow this and additional works at: Part of the Mathematics Commons, and the Statistics and Probability Commons. Recommended Citation: Soto, Carlos J., "Associated Hypotheses in Linear Models for Unbalanced Data" (2015). Theses and Dissertations. Paper 84. This Thesis is brought to you for free and open access by UWM Digital Commons. It has been accepted for inclusion in Theses and Dissertations by an authorized administrator of UWM Digital Commons. For more information, please contact kristinw@uwm.edu.

ASSOCIATED HYPOTHESES IN LINEAR MODELS FOR UNBALANCED DATA by Carlos J. Soto. A Thesis Submitted in Partial Fulfillment of the Requirements for the Degree of Master of Science in Mathematics at The University of Wisconsin-Milwaukee, May 2015

ABSTRACT

ASSOCIATED HYPOTHESES IN LINEAR MODELS FOR UNBALANCED DATA by Carlos J. Soto, The University of Wisconsin-Milwaukee, 2015. Under the Supervision of Professor Jay Beder.

When looking at factorial experiments there are several natural hypotheses that can be tested. In a two-factor or a × b design, the three null hypotheses of greatest interest are the absence of each main effect and the absence of interaction. There are two ways to construct the numerator sum of squares for testing these, namely adjusted or sequential sums of squares (known as type III and type I, respectively, in SAS). Searle has pointed out that, for unbalanced data, a sequential sum of squares for one of these hypotheses is equal (with probability 1) to an adjusted sum of squares for a non-standard associated hypothesis. In his view, then, sequential sums of squares may test the wrong hypotheses. We give an exposition of this topic to show how to derive the hypothesis associated with a given sequential sum of squares.

TABLE OF CONTENTS

1 Introduction
2 Linear Models
  2.1 Notation
  2.2 Least Squares
  2.3 Hypotheses
  2.4 Restrictions
3 Hypothesis in Linear Models
  3.1 Linear Hypothesis
  3.2 Nested Hypothesis and Sequential Sum of Squares
  3.3 Associated Hypothesis
4 Some Associated Hypotheses
  4.1 Rows
  4.2 Columns
  4.3 Additivity
5 Conclusion
REFERENCES

LIST OF TABLES

4.1 Soil Data

ACKNOWLEDGMENTS

The author wishes to thank Professor Jay Beder and my committee members, Professor Jugal K. Ghorai and Professor Vytaras Brazauskas. I'd also like to thank my peers, brothers, and friends, especially Andrew Kaldunski, who have all supported me.

1 Introduction

When looking at factorial experiments there are several natural hypotheses that can be tested. In a two-factor or a × b design, the three null hypotheses of greatest interest are the absence of each main effect and the absence of interaction. We assume that no cell in our experiment is empty, and further that the cells may be unbalanced, that is, they do not necessarily all have the same sample size. To test our hypotheses we first fit a linear model. There are then two ways to construct the numerator sum of squares for testing these hypotheses, namely adjusted or sequential sums of squares (known as type III and type I, respectively, in SAS). In Linear Models [2, Section 7.(f)], Searle derives the hypotheses associated with various sequential sums of squares in an a × b design. He has pointed out that, for unbalanced data, a sequential sum of squares for one of these hypotheses is equal (with probability 1) to an adjusted sum of squares for a non-standard associated hypothesis. In his view, then, sequential sums of squares may test the wrong hypotheses. Some of the hypotheses are not directly given in terms of the cell means µ_{ij}, and the computations are somewhat complicated and ad hoc. We give an exposition building up to Theorem 3.3 and Corollary 3.1, and then derive from them the hypotheses associated with sequential sums of squares in an a × b design. Linear models are first explored and the usual methods of testing hypotheses are given. Several theorems that are presented without proof are from Linear Models and Design [1].

2 Linear Models

For testing hypotheses we will fit a full linear model E(Y) = Xβ and compare it to a restricted model, the model under our null hypothesis H_0. To fit our model the method of least squares will be used. For the full and restricted models the sums of squared errors are denoted SSE and SSE_R respectively. If SSE and SSE_R are close to each other, then we will not reject H_0. It is important to note that SSE_R ≥ SSE, since a restricted model's error cannot be less than a full model's. Thus, if SSE_R is significantly greater than SSE we will reject H_0, which is equivalent to SSE_R/SSE being significantly large.

2.1 Notation

Working with an a × b design there is some useful notation we will be using. First, let p = ab, the number of cells, and let µ_{ij} denote the mean of cell ij. Next, n_{i.} denotes the number of observations in the ith row, that is, n_{i.} = Σ_{j=1}^b n_{ij}. Similarly, for columns, n_{.j} = Σ_{i=1}^a n_{ij}. Thus the total number of observations is N = Σ_i Σ_j n_{ij} = Σ_j n_{.j} = Σ_i n_{i.}. In general, when the lower and upper bounds of summations are obvious, the notation Σ_i = Σ_{i=1}^a and Σ_j = Σ_{j=1}^b will be used.

2.2 Least Squares

Since the method of least squares will be used, a few notes about the method will be made; however, since the method is quite standard, derivations of most equations are omitted. Let X denote the N × p design matrix, let Y represent the vector of observed values, and let Ŷ represent the vector of fitted values, both of which are N × 1. The method of least squares minimizes

SSE := ‖Y − Ŷ‖² = Σ_{j=1}^N (Y_j − Ŷ_j)².

The minimizing fit is Ŷ = X(X′X)^{−1}X′Y, and since Ŷ = Xβ̂, we have β̂ = (X′X)^{−1}X′Y. Throughout this paper, the hat matrix is defined to be P = X(X′X)^{−1}X′. The matrix P is a projection matrix from R^N to R^N, where N is the total number of observations as defined in the previous section. It is useful to think of R^N as the observation space. Ŷ = PY is the orthogonal projection of Y onto V := R(X), the column space of X.

2.3 Hypotheses

In an a × b design there are six main hypotheses of interest: only A present, only B present, absence of each main effect, and absence of interaction. The hypothesis of only A present consists of the equations

µ_{i1} = µ_{i2} = ⋯ = µ_{ib},  i = 1, …, a.

Let ρ_i denote the ith row mean, that is, ρ_i = (1/b) Σ_{j=1}^b µ_{ij}. A natural hypothesis would be H_0: ρ_1 = ρ_2 = ⋯ = ρ_a, that is, all rows have the same mean. This is referred to as A effect absent.
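To make the least-squares formulas of Section 2.2 concrete, here is a small numerical sketch (my own illustration, not from the thesis; the cell counts anticipate Example 4.1 below, and the responses are simulated):

```python
# Sketch of Section 2.2: cell-means fit for a 2 x 3 design (simulated data).
import numpy as np

n = [3, 2, 2, 4, 1, 3]   # n_ij, cells ordered (1,1),(1,2),(1,3),(2,1),(2,2),(2,3)
N, p = sum(n), len(n)

X = np.zeros((N, p))     # block-diagonal design matrix of the cell-means model
r = 0
for j, nij in enumerate(n):
    X[r:r + nij, j] = 1.0
    r += nij

rng = np.random.default_rng(0)
Y = rng.normal(size=N)   # placeholder response vector

P = X @ np.linalg.inv(X.T @ X) @ X.T          # hat matrix P = X(X'X)^{-1}X'
beta_hat = np.linalg.inv(X.T @ X) @ X.T @ Y   # least-squares estimate
Y_hat = P @ Y
SSE = np.sum((Y - Y_hat) ** 2)

assert np.allclose(P @ P, P) and np.allclose(P, P.T)   # orthogonal projection
assert np.allclose(Y_hat, X @ beta_hat)
```

Here β̂ is simply the vector of cell sample means, and P projects R^N onto V = R(X).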

The hypothesis of only B present consists of the equations

µ_{1j} = µ_{2j} = ⋯ = µ_{aj},  j = 1, …, b.

Let γ_j denote the jth column mean, that is, γ_j = (1/a) Σ_{i=1}^a µ_{ij}. A natural hypothesis would be H_0: γ_1 = γ_2 = ⋯ = γ_b, that is, all columns have the same mean. This is referred to as B effect absent. Another main hypothesis tested is that of interaction of A and B, defined to be lack of additivity. The factors A and B are considered additive if a change from row i to row i′ can be achieved by adding the same constant to each cell mean of row i. Note that the constant need not be the same for different pairs of rows. While additivity holds between any pair of rows, it is more useful to equivalently consider additivity between row 1 and row i, i = 2, …, a. The hypothesis of additivity consists of the (a − 1)(b − 1) equations

µ_{ij} − µ_{1j} = µ_{i1} − µ_{11},  i = 2, …, a, j = 2, …, b.

The last main hypothesis tested is that of no effect present, that is, H_0: µ_{ij} equal for all i, j.

2.4 Restrictions

In our hypotheses of interest some sort of restriction or linear constraint is placed on β, where β ∈ R^p. It is useful to think of R^p as the parameter space of the model. A hypothesis which makes a linear constraint is considered a linear hypothesis. There are several ways in which a linear constraint can be written, and a few are given below.

Lemma 2.1. The following are equivalent:
(i) there exists a subspace U ⊆ R^p such that β ⊥ U;
(ii) there exists a subspace W ⊆ R^p such that β ∈ W;
(iii) there exists a matrix W and a vector β* such that β = Wβ*.

Here U is a set of contrast vectors. From this lemma it is important to note that W and U are orthogonal complements of each other. Since the hypotheses we will be testing are defined by a linear constraint, it is important to know how the model behaves under such a constraint. It seems natural that a linear model under a linear constraint should still be a linear model. We thus have the following theorem.

Theorem 2.2. If E(Y) = Xβ and β is subject to a linear constraint β ∈ W, then the constrained model is also a linear model, E(Y) = X*β*, where X* = XW for the matrix W of Lemma 2.1(iii) and an appropriate β*.

We will be assuming that X has full rank, and thus W ∩ N(X) = {0}, where N(X) is the nullspace of X, that is, N(X) = {c : Xc = 0}. Therefore X* has full rank as well.
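A sketch of Theorem 2.2 in code (my own illustration): the constraint matrix W for "only A present" in a 2 × 3 design, and the restricted design matrix X* = XW.

```python
# Sketch of Theorem 2.2: restricted design matrix X* = XW (illustration).
import numpy as np

n = [3, 2, 2, 4, 1, 3]
N = sum(n)
X = np.zeros((N, 6)); r = 0
for j, nij in enumerate(n):
    X[r:r + nij, j] = 1.0; r += nij

# "Only A present": (mu11, mu12, mu13, mu21, mu22, mu23)' = W (mu_1, mu_2)'.
W = np.array([[1, 0], [1, 0], [1, 0],
              [0, 1], [0, 1], [0, 1]], float)
X_star = X @ W

# X and W have full rank, so X* does as well; R(X*) is a subspace of R(X).
print(np.linalg.matrix_rank(X), np.linalg.matrix_rank(X_star))   # 6 2
```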

3 Hypothesis in Linear Models

3.1 Linear Hypothesis

We have already shown how to fit the unrestricted model, and now that we know how to restrict β, fitting a restricted model is similar in method. Let H_0: β ∈ W_0 be our null hypothesis, where W_0 is a subspace of R^p, and let V_0 := X(W_0), a subspace of V. To fit the restricted model, we find the value of Ŷ_0 ∈ V_0 that minimizes the distance to Y. Thus Ŷ_0 is the orthogonal projection of Y onto V_0, and we define SSE_R := ‖Y − Ŷ_0‖². At this point we have Y, Ŷ, and Ŷ_0, and since Ŷ, Ŷ_0 ∈ V = R(X), these three vectors form a right triangle. Thus we have

‖Y − Ŷ_0‖² = ‖Y − Ŷ‖² + ‖Ŷ − Ŷ_0‖².

If we denote SS(H_0) = ‖Ŷ − Ŷ_0‖² (the notation will become apparent shortly), then

SSE_R = SSE + SS(H_0), so SSE_R/SSE = 1 + SS(H_0)/SSE.

Recall that we will reject H_0 when SSE_R/SSE is significantly large. This is equivalent to rejecting when SS(H_0)/SSE is large. Thus we call SS(H_0) the sum of squares for testing H_0.

3.2 Nested Hypothesis and Sequential Sum of Squares

Thus far we have been interested in testing a single linear hypothesis. Generally an analysis of variance tests several hypotheses.
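The right-triangle identity above is easy to check numerically; a minimal sketch (my own illustration, simulated data):

```python
# Sketch: SSE_R = SSE + SS(H0) for H0: all cell means equal (simulated data).
import numpy as np

def proj(A):
    # Orthogonal projector onto the column space of A.
    return A @ np.linalg.pinv(A)

n = [3, 2, 2, 4, 1, 3]; N = sum(n)
X = np.zeros((N, 6)); r = 0
for j, nij in enumerate(n):
    X[r:r + nij, j] = 1.0; r += nij
Y = np.random.default_rng(1).normal(size=N)

W0 = np.ones((6, 1))            # H0: beta in W0 (all cell means equal)
P, P0 = proj(X), proj(X @ W0)   # projectors onto V and V0 = X(W0)

SSE = np.sum((Y - P @ Y) ** 2)
SSE_R = np.sum((Y - P0 @ Y) ** 2)
SS_H0 = np.sum((P @ Y - P0 @ Y) ** 2)
assert np.isclose(SSE_R, SSE + SS_H0)   # the right-triangle identity
```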

Several interesting things occur when there is more than one hypothesis, and we will begin to explore this.

Definition 3.1. A hypothesis H^(1) is nested in hypothesis H^(2) if H^(1) implies H^(2).

Since in our linear model β ∈ R^p, by Lemma 2.1 a general hypothesis H^(i) may be expressed as β ∈ W^(i) for an appropriate subspace W^(i) ⊆ R^p. Thus H^(1) is nested in H^(2) if and only if W_1 ⊆ W_2. Applying Lemma 2.1 again, H^(i) may be expressed as β ⊥ U^(i). Thus H^(1) is nested in H^(2) if and only if U_2 ⊆ U_1. The choice of H^(1) is the simplest possible hypothesis in the model; generally this is all cell means being equal.

In general, suppose W_1 ⊆ W_2, which means the hypothesis β ∈ W_1 is nested in β ∈ W_2. Let V_i = X(W_i) be the image of W_i, so that V_i ⊆ V = R(X), the column space of X. Thus V_1 ⊆ V_2 ⊆ V.

Theorem 3.1. We have Ŷ − Ŷ_2 ∈ V ⊖ V_2 and Ŷ_2 − Ŷ_1 ∈ V_2 ⊖ V_1 (V ⊖ V_2 denoting the orthogonal complement of V_2 in V). In particular, Y − Ŷ, Ŷ − Ŷ_2, and Ŷ_2 − Ŷ_1 are pairwise orthogonal.

Suppose we wanted to test β ∈ W_1, given that β ∈ W_2. Note that

‖Y − Ŷ_1‖² = ‖Y − Ŷ‖² + ‖Ŷ − Ŷ_2‖² + ‖Ŷ_2 − Ŷ_1‖² = ‖Y − Ŷ_2‖² + ‖Ŷ_2 − Ŷ_1‖².

As when we defined SS(H_0), if we divide both sides by SSE = ‖Y − Ŷ‖², then

‖Y − Ŷ_1‖²/SSE = ‖Y − Ŷ_2‖²/SSE + ‖Ŷ_2 − Ŷ_1‖²/SSE.

Thus ‖Ŷ_2 − Ŷ_1‖² is appropriate for testing β ∈ W_1, given that β ∈ W_2. We thus denote this as

SS(β ∈ W_1 | β ∈ W_2) = ‖Ŷ_2 − Ŷ_1‖².

Keeping with this notation,

‖Ŷ − Ŷ_1‖² = ‖Ŷ − Ŷ_2‖² + ‖Ŷ_2 − Ŷ_1‖², so ‖Ŷ_2 − Ŷ_1‖² = ‖Ŷ − Ŷ_1‖² − ‖Ŷ − Ŷ_2‖²,

that is, SS(β ∈ W_1 | β ∈ W_2) = SS(β ∈ W_1) − SS(β ∈ W_2). Since β ∈ W_1 is the simplest model, SS(β ∈ W_1) = ‖Ŷ − Ŷ_1‖² is generally referred to as the sum of squares for the model. Thus, since SS(β ∈ W_1) = SS(β ∈ W_1 | β ∈ W_2) + SS(β ∈ W_2), the quantities SS(β ∈ W_1 | β ∈ W_2) and SS(β ∈ W_2) are referred to as sequential sums of squares. This is because if we start with ‖Y − Ŷ‖² and add SS(β ∈ W_2) = ‖Ŷ − Ŷ_2‖² and SS(β ∈ W_1 | β ∈ W_2) = ‖Ŷ_2 − Ŷ_1‖² sequentially, we build up the sum of squares for the model.

To generalize Theorem 3.1 we can consider hypotheses H^(i): β ∈ W_i, where W_1 ⊆ W_2 ⊆ ⋯ ⊆ W_k ⊆ R^p and the W_i are distinct. Then V_1 ⊆ V_2 ⊆ ⋯ ⊆ V_k ⊆ V.

Theorem 3.2. We have Ŷ − Ŷ_k ∈ V ⊖ V_k and Ŷ_j − Ŷ_{j−1} ∈ V_j ⊖ V_{j−1}. In particular, Y − Ŷ, Ŷ − Ŷ_k, Ŷ_k − Ŷ_{k−1}, …, Ŷ_2 − Ŷ_1 are pairwise orthogonal.

3.3 Associated Hypothesis

Definition 3.2. Given a hypothesis H^(1) nested in H^(2), the hypothesis H̃ is associated with SS(H^(1) | H^(2)) if SS(H^(1) | H^(2)) = SS(H̃).
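A numerical sketch of the sequential decomposition above and the orthogonality of Theorems 3.1–3.2 (my own illustration, simulated data):

```python
# Sketch: sequential sums of squares for W1 (all means equal) nested in
# W2 (only A present), in the 2 x 3 cell-means model (simulated data).
import numpy as np

def proj(A):
    return A @ np.linalg.pinv(A)

n = [3, 2, 2, 4, 1, 3]; N = sum(n)
X = np.zeros((N, 6)); r = 0
for j, nij in enumerate(n):
    X[r:r + nij, j] = 1.0; r += nij
Y = np.random.default_rng(2).normal(size=N)

W1 = np.ones((6, 1))
W2 = np.array([[1, 0], [1, 0], [1, 0], [0, 1], [0, 1], [0, 1]], float)
Yh, Y1, Y2 = proj(X) @ Y, proj(X @ W1) @ Y, proj(X @ W2) @ Y

SS_W1 = np.sum((Yh - Y1) ** 2)     # SS(beta in W1): sum of squares for the model
SS_W2 = np.sum((Yh - Y2) ** 2)     # SS(beta in W2)
SS_seq = np.sum((Y2 - Y1) ** 2)    # SS(beta in W1 | beta in W2)
assert np.isclose(SS_W1, SS_W2 + SS_seq)

# Pairwise orthogonality, as in Theorem 3.1:
assert np.isclose((Y - Yh) @ (Yh - Y2), 0)
assert np.isclose((Yh - Y2) @ (Y2 - Y1), 0)
```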

Theorem 3.3. Consider the model E(Y) = Xβ, where X is N × p of full rank, and let W_1 ⊆ W_2 ⊆ R^p. Then there is a unique subspace W̃ satisfying SS(β ∈ W̃) = SS(β ∈ W_1 | β ∈ W_2), and we have df(β ∈ W̃) = df(β ∈ W_1 | β ∈ W_2). The subspace is given by W̃ = R(T P̃), where T = (X′X)^{−1}X′ and P̃ is defined as follows. Let V_i = X(W_i), let V = R(X) be the column space of X, and let P and P_i be the orthogonal projections of R^N on V and on V_i, respectively. Then P̃ = P − P_2 + P_1.

Corollary 3.1. The subspace Ũ := W̃^⊥ is given by Ũ = N(P̃T′), where T′ = X(X′X)^{−1} is the transpose of T.

4 Some Associated Hypotheses

Throughout the rest of this paper, instead of working with a general a × b design, we will be working with a 2 × 3 design.
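The recipe of Theorem 3.3 and Corollary 3.1 mechanizes directly; the following sketch (function name and tolerance are my own choices) returns a basis of contrast vectors for Ũ:

```python
# Sketch of Theorem 3.3 / Corollary 3.1: Ptilde = P - P2 + P1 and
# Utilde = N(Ptilde T'), computed numerically.
import numpy as np

def proj(A):
    return A @ np.linalg.pinv(A)

def associated_contrasts(X, W1, W2, tol=1e-9):
    """Columns span Utilde, the contrast space of the associated hypothesis.
    Assumes X is N x p of full rank with N >= p, and W1 nested within W2."""
    Ptilde = proj(X) - proj(X @ W2) + proj(X @ W1)
    Tt = X @ np.linalg.inv(X.T @ X)        # T' = X(X'X)^{-1}
    _, s, Vt = np.linalg.svd(Ptilde @ Tt)  # nullspace via the SVD
    return Vt[s < tol].T

# Example: the 2 x 3 design of this chapter, rows hypothesis.
n = [3, 2, 2, 4, 1, 3]; N = sum(n)
X = np.zeros((N, 6)); r = 0
for j, nij in enumerate(n):
    X[r:r + nij, j] = 1.0; r += nij
U = associated_contrasts(X, np.ones((6, 1)),
                         np.array([[1,0],[1,0],[1,0],[0,1],[0,1],[0,1]], float))
print(U.shape)   # (6, 1): a single contrast; compare Theorem 4.1 below
```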

4.1 Rows

Theorem 4.1. Consider the hypotheses H^(1): all µ_{ij} equal, and H^(2): only A present. Clearly H^(1) is nested in H^(2). The hypothesis associated with the sequential sum of squares SS(H^(1) | H^(2)) is

H̃: ρ_1 = ρ_2 = ⋯ = ρ_a, where ρ_i = (1/n_{i.}) Σ_j n_{ij} µ_{ij}

(the weighted row means). Thus, in the 2 × 3 design, let H^(1): µ_{11} = µ_{12} = µ_{13} = µ_{21} = µ_{22} = µ_{23} and H^(2): µ_{11} = µ_{12} = µ_{13} and µ_{21} = µ_{22} = µ_{23}. The hypothesis associated with SS(H^(1) | H^(2)) is H̃: ρ_1 = ρ_2.

Proof. We need P̃ = P − P_2 + P_1 by Theorem 3.3. Let 1_k be the k × 1 vector of 1s, let J_{n×m} be the n × m matrix of 1s, and J_n the n × n matrix of 1s. Ordering the cells (1,1), (1,2), (1,3), (2,1), (2,2), (2,3), for our unrestricted full model we have the block-diagonal matrix

X = diag(1_{n_{11}}, 1_{n_{12}}, 1_{n_{13}}, 1_{n_{21}}, 1_{n_{22}}, 1_{n_{23}}).

Thus P = X(X′X)^{−1}X′ = diag(A, B, C, D, E, F), where

A = (1/n_{11})J_{n_{11}}, B = (1/n_{12})J_{n_{12}}, C = (1/n_{13})J_{n_{13}}, D = (1/n_{21})J_{n_{21}}, E = (1/n_{22})J_{n_{22}}, F = (1/n_{23})J_{n_{23}}.

Under our null hypothesis of equality of means within rows we want µ_{11} = µ_{12} = µ_{13} and µ_{21} = µ_{22} = µ_{23}. These four equations leave two free parameters, say µ_1 and µ_2. So by Theorem 2.2 we want a matrix W such that

(µ_{11}, µ_{12}, µ_{13}, µ_{21}, µ_{22}, µ_{23})′ = W (µ_1, µ_2)′.

We can see that

W = \begin{pmatrix} 1 & 0 \\ 1 & 0 \\ 1 & 0 \\ 0 & 1 \\ 0 & 1 \\ 0 & 1 \end{pmatrix},

since (µ_1, µ_1, µ_1, µ_2, µ_2, µ_2)′ = W (µ_1, µ_2)′. Therefore, we have

X_2 = XW = \begin{pmatrix} 1_{n_{1.}} & 0 \\ 0 & 1_{n_{2.}} \end{pmatrix}

(under our cell ordering, the first n_{1.} observations lie in row 1), and

P_2 = X_2(X_2′X_2)^{−1}X_2′ = diag(A_2, B_2), where A_2 = (1/n_{1.})J_{n_{1.}} and B_2 = (1/n_{2.})J_{n_{2.}}.

For the simplest model we have µ_{11} = µ_{12} = µ_{13} = µ_{21} = µ_{22} = µ_{23}. We have one free parameter, say µ. By Theorem 2.2 we need a W such that

(µ_{11}, µ_{12}, µ_{13}, µ_{21}, µ_{22}, µ_{23})′ = W (µ).

It is clear that W = 1_6, since (µ, µ, µ, µ, µ, µ)′ = 1_6 µ. Therefore, we have X_1 = XW = 1_N, and further P_1 = X_1(X_1′X_1)^{−1}X_1′ = (1/N)J_N. Since P is a projection matrix from R^N to R^N, all the P matrices are N × N. We have P̃ = P − P_2 + P_1. The only matrix we still need is T′ = X(X′X)^{−1}, which is block diagonal:

T′ = diag((1/n_{11})1_{n_{11}}, (1/n_{12})1_{n_{12}}, (1/n_{13})1_{n_{13}}, (1/n_{21})1_{n_{21}}, (1/n_{22})1_{n_{22}}, (1/n_{23})1_{n_{23}}).

Remark 4.1. The very next step is to calculate P̃T′ and then, according to Corollary 3.1, find its nullspace. Let us introduce the notation C(P̃T′), which we'll think of as the core of the matrix. The matrix P̃T′ has many repeated rows — in fact only 6 unique rows, one for each cell — so N − 6 rows can be eliminated when finding the nullspace. The 6 unique rows compose C(P̃T′), and thus C(P̃T′) is 6 × 6. To picture P̃T′: the first row of C(P̃T′) is repeated n_{11} times, the second row is repeated n_{12} times, the third n_{13} times, and so on. Note that N(P̃T′) = N(C(P̃T′)), since eliminating repeated rows does not affect the nullspace.
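Remark 4.1 in code (my own small sketch): given the cell sizes, the core is obtained by keeping the first row of each repeated block.

```python
# Sketch of Remark 4.1: collapse the repeated rows of Ptilde T' to the core.
import numpy as np

def core(M, counts):
    # Row index where each cell's block starts: first row of each block.
    starts = np.cumsum([0] + list(counts[:-1]))
    return M[starts]

# Toy check: 6 distinct rows repeated 3, 2, 2, 4, 1, 3 times (N = 15).
counts = [3, 2, 2, 4, 1, 3]
M = np.repeat(np.eye(6), counts, axis=0)   # 15 x 6, rows repeat per cell
print(core(M, counts))                     # recovers the 6 x 6 identity
```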

We find

C(P̃T′) = \begin{pmatrix} A & (1/N)J_3 \\ (1/N)J_3 & B \end{pmatrix},

where

A = diag(1/n_{11}, 1/n_{12}, 1/n_{13}) − (1/n_{1.})J_3 + (1/N)J_3 and B = diag(1/n_{21}, 1/n_{22}, 1/n_{23}) − (1/n_{2.})J_3 + (1/N)J_3.

The next step is to put the matrix in reduced row echelon form. It is important to note that Gaussian elimination by hand is not efficient and should be avoided if replicating this result. The reduced matrix is as follows:

\begin{pmatrix}
1 & 0 & 0 & 0 & 0 & n_{11}n_{2.}/(n_{1.}n_{23}) \\
0 & 1 & 0 & 0 & 0 & n_{12}n_{2.}/(n_{1.}n_{23}) \\
0 & 0 & 1 & 0 & 0 & n_{13}n_{2.}/(n_{1.}n_{23}) \\
0 & 0 & 0 & 1 & 0 & -n_{21}/n_{23} \\
0 & 0 & 0 & 0 & 1 & -n_{22}/n_{23} \\
0 & 0 & 0 & 0 & 0 & 0
\end{pmatrix}.
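As just noted, hand elimination is laborious here. One way to replicate the reduction (my own sketch, using the sympy computer-algebra library, not the thesis's method) is to build C(P̃T′) symbolically and ask for its nullspace directly:

```python
# Symbolic sketch of the rows case: nullspace of C(Ptilde T') via sympy.
import sympy as sp

n11, n12, n13, n21, n22, n23 = sp.symbols('n11 n12 n13 n21 n22 n23', positive=True)
n1, n2 = n11 + n12 + n13, n21 + n22 + n23   # row totals n_{1.}, n_{2.}
Nt = n1 + n2                                # total sample size N

J3 = sp.ones(3, 3)
A = sp.diag(1/n11, 1/n12, 1/n13) - J3/n1 + J3/Nt
B = sp.diag(1/n21, 1/n22, 1/n23) - J3/n2 + J3/Nt
C = (A.row_join(J3/Nt)).col_join((J3/Nt).row_join(B))

ns = C.nullspace()                 # one-dimensional, as the proof shows
c = sp.simplify(ns[0] / ns[0][0])  # normalize so the first entry is 1
print(c.T)   # proportional to (n11/n1, n12/n1, n13/n1, -n21/n2, -n22/n2, -n23/n2)
```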

Next, to find the nullspace, we need to solve the system given by the reduced matrix above applied to (c_{11}, c_{12}, c_{13}, c_{21}, c_{22}, c_{23})′ = 0. Thus we have

c_{11} = −(n_{11}n_{2.})/(n_{1.}n_{23}) c_{23}
c_{12} = −(n_{12}n_{2.})/(n_{1.}n_{23}) c_{23}
c_{13} = −(n_{13}n_{2.})/(n_{1.}n_{23}) c_{23}
c_{21} = (n_{21}/n_{23}) c_{23}
c_{22} = (n_{22}/n_{23}) c_{23}.

If we let c_{23} = −n_{23}/n_{2.}, then

c_{11} = n_{11}/n_{1.}, c_{12} = n_{12}/n_{1.}, c_{13} = n_{13}/n_{1.}, c_{21} = −n_{21}/n_{2.}, c_{22} = −n_{22}/n_{2.}, c_{23} = −n_{23}/n_{2.}.

Thus

(n_{11}/n_{1.})µ_{11} + (n_{12}/n_{1.})µ_{12} + (n_{13}/n_{1.})µ_{13} − (n_{21}/n_{2.})µ_{21} − (n_{22}/n_{2.})µ_{22} − (n_{23}/n_{2.})µ_{23} = 0,

that is,

(1/n_{1.})[n_{11}µ_{11} + n_{12}µ_{12} + n_{13}µ_{13}] = (1/n_{2.})[n_{21}µ_{21} + n_{22}µ_{22} + n_{23}µ_{23}].

Recall that ρ_i = (1/n_{i.}) Σ_j n_{ij}µ_{ij}. Therefore, the associated hypothesis is ρ_1 = ρ_2, as claimed.

Example 4.1. Let's see the hypothesis H̃: ρ_1 = ρ_2 as applied to the experiment of Table 4.1, a 2 × 3 soil–variety layout with cell sample sizes n_{11} = 3, n_{12} = 2, n_{13} = 2 and n_{21} = 4, n_{22} = 1, n_{23} = 3, so N = 15.

Table 4.1: Soil and variety data, by cell (soil, variety): (1,1): 6, ·, 3; (1,2): 5, ·; (1,3): 4, ·; (2,1): ·, 5, 9, 8; (2,2): 3; (2,3): 8, 9, · (· marks entries illegible in the source; only the cell counts enter the derivation).

Searle claims that the sequential sum of squares for Soil only tests the hypothesis (1/7)(3µ_{11} + 2µ_{12} + 2µ_{13}) = (1/8)(4µ_{21} + µ_{22} + 3µ_{23}). Using the previous theorem to derive this hypothesis, we have

X = diag(1_3, 1_2, 1_2, 1_4, 1_1, 1_3).

Using statistical software we get the matrix P̃T′, in which the first row is repeated n_{11} = 3 times, the fourth row is repeated n_{12} = 2 times, and so on. Since our total sample size is N = 15, we end up with a 15 × 6 matrix. The core of P̃T′ is

C(P̃T′) = \begin{pmatrix} A & (1/15)J_3 \\ (1/15)J_3 & B \end{pmatrix}, with A = diag(1/3, 1/2, 1/2) − (8/105)J_3 and B = diag(1/4, 1, 1/3) − (7/120)J_3.

Thus we have

c_{11} = −(8/7)c_{23}, c_{12} = −(16/21)c_{23}, c_{13} = −(16/21)c_{23}, c_{21} = (4/3)c_{23}, c_{22} = (1/3)c_{23}.

If we let c_{23} = −n_{23}/n_{2.} = −3/8, then

c_{11} = −(8/7)(−3/8) = 3/7, c_{12} = −(16/21)(−3/8) = 2/7, c_{13} = 2/7, c_{21} = (4/3)(−3/8) = −4/8, c_{22} = (1/3)(−3/8) = −1/8, c_{23} = −3/8.

Therefore H̃: (3/7)µ_{11} + (2/7)µ_{12} + (2/7)µ_{13} − (4/8)µ_{21} − (1/8)µ_{22} − (3/8)µ_{23} = 0. So H̃: (3/7)µ_{11} + (2/7)µ_{12} + (2/7)µ_{13} = (4/8)µ_{21} + (1/8)µ_{22} + (3/8)µ_{23}, or H̃: (1/7)(3µ_{11} + 2µ_{12} + 2µ_{13}) = (1/8)(4µ_{21} + µ_{22} + 3µ_{23}), as Searle claims.
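A numeric check of Example 4.1 (my own sketch; only the cell counts of Table 4.1 enter):

```python
# Verify Example 4.1: recover (3/7, 2/7, 2/7, -4/8, -1/8, -3/8).
import numpy as np

def proj(A):
    return A @ np.linalg.pinv(A)

n = [3, 2, 2, 4, 1, 3]; N = sum(n)       # n1. = 7, n2. = 8, N = 15
X = np.zeros((N, 6)); r = 0
for j, nij in enumerate(n):
    X[r:r + nij, j] = 1.0; r += nij

W1 = np.ones((6, 1))                                          # all means equal
W2 = np.array([[1,0],[1,0],[1,0],[0,1],[0,1],[0,1]], float)   # only Soil present
Ptilde = proj(X) - proj(X @ W2) + proj(X @ W1)
M = Ptilde @ (X @ np.linalg.inv(X.T @ X))                     # Ptilde T'

_, s, Vt = np.linalg.svd(M)
c = Vt[-1]               # the nullspace here is one-dimensional
c = c * (3 / 7) / c[0]   # rescale so c11 = 3/7
print(np.round(c, 4))    # [ 0.4286  0.2857  0.2857 -0.5    -0.125  -0.375 ]
```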

4.2 Columns

Theorem 4.2. Consider the hypotheses H^(1): all µ_{ij} equal, and H^(2): only B present. Clearly H^(1) is nested in H^(2). The hypothesis associated with the sequential sum of squares SS(H^(1) | H^(2)) is

H̃: γ_1 = γ_2 = ⋯ = γ_b, where γ_j = (1/n_{.j}) Σ_i n_{ij} µ_{ij}

(the weighted column means). Thus let H^(1): µ_{11} = µ_{12} = µ_{13} = µ_{21} = µ_{22} = µ_{23} and H^(2): µ_{11} = µ_{21}, µ_{12} = µ_{22}, and µ_{13} = µ_{23}. The hypothesis associated with SS(H^(1) | H^(2)) is H̃: γ_1 = γ_2 = γ_3.

Proof. First we note that the full model and the simplest model are the same as in the hypothesis of row equality in the previous section; thus P, P_1, and T′ are the same as before. Under our null hypothesis of equality of means within columns we want µ_{11} = µ_{21}, µ_{12} = µ_{22}, and µ_{13} = µ_{23}. These three equations leave three free parameters, say µ_1, µ_2, and µ_3. So by Theorem 2.2 we want a matrix W such that

(µ_{11}, µ_{12}, µ_{13}, µ_{21}, µ_{22}, µ_{23})′ = W (µ_1, µ_2, µ_3)′.

Thus we can see that

W = \begin{pmatrix} I_3 \\ I_3 \end{pmatrix},

since (µ_1, µ_2, µ_3, µ_1, µ_2, µ_3)′ = W (µ_1, µ_2, µ_3)′. Thus we have

X_2 = XW = \begin{pmatrix} 1_{n_{11}} & 0 & 0 \\ 0 & 1_{n_{12}} & 0 \\ 0 & 0 & 1_{n_{13}} \\ 1_{n_{21}} & 0 & 0 \\ 0 & 1_{n_{22}} & 0 \\ 0 & 0 & 1_{n_{23}} \end{pmatrix}.

Further, P_2 = X_2(X_2′X_2)^{−1}X_2′ links observations in the same column: its block for cells (i,j) and (i′,j′) is (1/n_{.j})J_{n_{ij} × n_{i′j′}} if j = j′, and 0 otherwise.

Again P̃T′ has 6 unique rows, and C(P̃T′) = \begin{pmatrix} A \\ B \end{pmatrix},

where A consists of the three unique rows for the cells of row 1 and B of those for row 2; explicitly, the entry of C(P̃T′) in row (i,j) and column (i′,j′) is

δ_{(i,j),(i′,j′)}/n_{ij} − δ_{j,j′}/n_{.j} + 1/N.

The matrix C(P̃T′) in reduced row echelon form is as follows:

\begin{pmatrix}
1 & 0 & 0 & 0 & n_{11}n_{.2}/(n_{.1}n_{22}) & n_{11}n_{.3}/(n_{.1}n_{23}) \\
0 & 1 & 0 & 0 & -n_{12}/n_{22} & 0 \\
0 & 0 & 1 & 0 & 0 & -n_{13}/n_{23} \\
0 & 0 & 0 & 1 & n_{21}n_{.2}/(n_{.1}n_{22}) & n_{21}n_{.3}/(n_{.1}n_{23}) \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0
\end{pmatrix}.

Thus, to find the nullspace of C(P̃T′), we solve this system for (c_{11}, c_{12}, c_{13}, c_{21}, c_{22}, c_{23})′, obtaining

c_{11} = −(n_{11}n_{.2})/(n_{.1}n_{22}) c_{22} − (n_{11}n_{.3})/(n_{.1}n_{23}) c_{23}
c_{12} = (n_{12}/n_{22}) c_{22}
c_{13} = (n_{13}/n_{23}) c_{23}
c_{21} = −(n_{21}n_{.2})/(n_{.1}n_{22}) c_{22} − (n_{21}n_{.3})/(n_{.1}n_{23}) c_{23}.

If we let c_{22} = −2n_{22}/n_{.2} and c_{23} = n_{23}/n_{.3}, then

c_{11} = n_{11}/n_{.1}, c_{12} = −2n_{12}/n_{.2}, c_{13} = n_{13}/n_{.3}, c_{21} = n_{21}/n_{.1}, c_{22} = −2n_{22}/n_{.2}, c_{23} = n_{23}/n_{.3}.

So

(n_{11}/n_{.1})µ_{11} − (2n_{12}/n_{.2})µ_{12} + (n_{13}/n_{.3})µ_{13} + (n_{21}/n_{.1})µ_{21} − (2n_{22}/n_{.2})µ_{22} + (n_{23}/n_{.3})µ_{23} = 0,

which rearranges to

(1/n_{.1})(n_{11}µ_{11} + n_{21}µ_{21}) − (1/n_{.2})(n_{12}µ_{12} + n_{22}µ_{22}) = (1/n_{.2})(n_{12}µ_{12} + n_{22}µ_{22}) − (1/n_{.3})(n_{13}µ_{13} + n_{23}µ_{23}).

Recall that γ_j = (1/n_{.j}) Σ_i n_{ij}µ_{ij}. Thus γ_1 − γ_2 = γ_2 − γ_3. Similarly, if we go back to our system of equations and let c_{22} = −n_{22}/n_{.2} and c_{23} = 0, then we obtain γ_1 − γ_2 = 0. Thus γ_1 = γ_2, and by our previous equation, γ_1 = γ_2 = γ_3, as claimed.

Example 4.2. Continuing with the data from Example 4.1, we examine the hypothesis of Variety only; here n_{.1} = 7, n_{.2} = 3, n_{.3} = 5. Computing C(P̃T′) as above and reducing the matrix, we get

the solution

c_{11} = −(9/7)c_{22} − (5/7)c_{23}, c_{12} = 2c_{22}, c_{13} = (2/3)c_{23}, c_{21} = −(12/7)c_{22} − (20/21)c_{23}.

If we let c_{22} = −2n_{22}/n_{.2} = −2/3 and c_{23} = n_{23}/n_{.3} = 3/5, then

c_{11} = 6/7 − 3/7 = 3/7, c_{12} = −4/3, c_{13} = 2/5, c_{21} = 8/7 − 4/7 = 4/7, c_{22} = −2/3, c_{23} = 3/5.

Thus

(3/7)µ_{11} − (4/3)µ_{12} + (2/5)µ_{13} + (4/7)µ_{21} − (2/3)µ_{22} + (3/5)µ_{23} = 0,

or

(1/7)(3µ_{11} + 4µ_{21}) − (1/3)(2µ_{12} + µ_{22}) = (1/3)(2µ_{12} + µ_{22}) − (1/5)(2µ_{13} + 3µ_{23}).

Recall that γ_j = (1/n_{.j}) Σ_i n_{ij}µ_{ij}, so γ_1 − γ_2 = γ_2 − γ_3.

Similarly, if we let c_{22} = −n_{22}/n_{.2} = −1/3 and c_{23} = 0, then we get

c_{11} = 3/7, c_{12} = −2/3, c_{13} = 0, c_{21} = 4/7, c_{22} = −1/3, c_{23} = 0.

Thus (3/7)µ_{11} − (2/3)µ_{12} + (4/7)µ_{21} − (1/3)µ_{22} = 0, or (1/3)(2µ_{12} + µ_{22}) = (1/7)(3µ_{11} + 4µ_{21}), so γ_1 = γ_2 and thus γ_1 = γ_2 = γ_3.
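A numeric check of Example 4.2 (my own sketch): the nullspace is now two-dimensional, so we verify that the weighted column-mean contrasts γ_1 − γ_2 and γ_2 − γ_3 lie in it.

```python
# Verify Example 4.2: the column contrasts annihilate Ptilde T'.
import numpy as np

def proj(A):
    return A @ np.linalg.pinv(A)

n = [3, 2, 2, 4, 1, 3]; N = sum(n)       # n.1 = 7, n.2 = 3, n.3 = 5
X = np.zeros((N, 6)); r = 0
for j, nij in enumerate(n):
    X[r:r + nij, j] = 1.0; r += nij

W1 = np.ones((6, 1))
W2 = np.vstack([np.eye(3), np.eye(3)])   # only B (Variety) present
Ptilde = proj(X) - proj(X @ W2) + proj(X @ W1)
M = Ptilde @ (X @ np.linalg.inv(X.T @ X))

u = np.array([3/7, -2/3, 0, 4/7, -1/3, 0])   # gamma_1 - gamma_2
v = np.array([0, 2/3, -2/5, 0, 1/3, -3/5])   # gamma_2 - gamma_3
assert np.allclose(M @ u, 0) and np.allclose(M @ v, 0)
```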

4.3 Additivity

Theorem 4.3. Consider the hypotheses H^(1): only A present, and H^(2): additivity. H^(1) is nested in H^(2). Then the hypothesis associated with the sequential sum of squares SS(H^(1) | H^(2)) is

H̃: γ_j = (1/n_{.j}) Σ_i n_{ij} ρ_i for each j, where γ_j = (1/n_{.j}) Σ_i n_{ij} µ_{ij}

and ρ_i is the weighted row mean as in Theorem 4.1. Thus for our 2 × 3 case we have H^(1): µ_{11} = µ_{12} = µ_{13} and µ_{21} = µ_{22} = µ_{23}, and H^(2): µ_{22} − µ_{12} = µ_{21} − µ_{11} and µ_{23} − µ_{13} = µ_{21} − µ_{11}. The hypothesis associated with SS(H^(1) | H^(2)) is H̃: γ_1 = (1/n_{.1})(n_{11}ρ_1 + n_{21}ρ_2) and γ_2 = (1/n_{.2})(n_{12}ρ_1 + n_{22}ρ_2).

Proof. This hypothesis ended up being the most challenging to calculate, so instead of working with unbalanced data we will use balanced data, with n observations in each cell, so that N = 6n, n_{i.} = 3n, and n_{.j} = 2n. For the unrestricted model we have

X = diag(1_n, 1_n, 1_n, 1_n, 1_n, 1_n) and P = X(X′X)^{−1}X′ = diag((1/n)J_n, …, (1/n)J_n).

For the model of A only in the balanced case we have

P_1 = X_1(X_1′X_1)^{−1}X_1′ = diag((1/(3n))J_{3n}, (1/(3n))J_{3n}).

Under our hypothesis of additivity we have µ_{22} − µ_{12} = µ_{21} − µ_{11} and µ_{23} − µ_{13} = µ_{21} − µ_{11}, leaving four free parameters, say µ_{11}, µ_{12}, µ_{13}, and µ_{21}. Thus we want W such that

(µ_{11}, µ_{12}, µ_{13}, µ_{21}, µ_{22}, µ_{23})′ = W (µ_{11}, µ_{12}, µ_{13}, µ_{21})′,

so

W = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ -1 & 1 & 0 & 1 \\ -1 & 0 & 1 & 1 \end{pmatrix}.

Therefore, we have

X_2 = XW = \begin{pmatrix} 1_n & 0 & 0 & 0 \\ 0 & 1_n & 0 & 0 \\ 0 & 0 & 1_n & 0 \\ 0 & 0 & 0 & 1_n \\ -1_n & 1_n & 0 & 1_n \\ -1_n & 0 & 1_n & 1_n \end{pmatrix}

and P_2 = X_2(X_2′X_2)^{−1}X_2′, the orthogonal projection onto the additive model; in the balanced case its block for cells (i,j) and (i′,j′) is

[δ_{ii′}/(3n) + δ_{jj′}/(2n) − 1/(6n)] J_n.

Thus we have P̃ = P − P_2 + P_1, whose block for cells (i,j) and (i′,j′) is

[δ_{ii′}δ_{jj′}/n − δ_{jj′}/(2n) + 1/(6n)] J_n,

and

C(P̃T′) = (1/(6n)) \begin{pmatrix}
4 & 1 & 1 & -2 & 1 & 1 \\
1 & 4 & 1 & 1 & -2 & 1 \\
1 & 1 & 4 & 1 & 1 & -2 \\
-2 & 1 & 1 & 4 & 1 & 1 \\
1 & -2 & 1 & 1 & 4 & 1 \\
1 & 1 & -2 & 1 & 1 & 4
\end{pmatrix}.
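A numeric check of the balanced additivity computation (my own sketch, with n = 2 observations per cell):

```python
# Verify the balanced additivity case: entries of C(Ptilde T') and the
# contrast corresponding to gamma_1 = (rho_1 + rho_2)/2.
import numpy as np

def proj(A):
    return A @ np.linalg.pinv(A)

nn = 2                                   # common cell size n
X = np.zeros((6 * nn, 6)); r = 0
for j in range(6):
    X[r:r + nn, j] = 1.0; r += nn

W1 = np.array([[1,0],[1,0],[1,0],[0,1],[0,1],[0,1]], float)   # only A present
W2 = np.array([[1,0,0,0], [0,1,0,0], [0,0,1,0], [0,0,0,1],
               [-1,1,0,1], [-1,0,1,1]], float)                # additivity
Ptilde = proj(X) - proj(X @ W2) + proj(X @ W1)
M = Ptilde @ (X @ np.linalg.inv(X.T @ X))

# Entries of M, scaled by 6n, are 4 (own cell), -2 (same column), 1 (else):
print(sorted(set(np.round(M.flatten() * 6 * nn, 6))))   # [-2.0, 1.0, 4.0]

c = np.array([2/3, -1/3, -1/3, 2/3, -1/3, -1/3])   # gamma_1 = (rho_1+rho_2)/2
assert np.allclose(M @ c, 0)
```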

After some algebra we have the reduced form

\begin{pmatrix}
1 & 0 & 0 & 0 & 1 & 1 \\
0 & 1 & 0 & 0 & -1 & 0 \\
0 & 0 & 1 & 0 & 0 & -1 \\
0 & 0 & 0 & 1 & 1 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0
\end{pmatrix}.

To find the nullspace of this matrix we solve the corresponding system for (c_{11}, c_{12}, c_{13}, c_{21}, c_{22}, c_{23})′. We have

c_{11} = −c_{22} − c_{23}, c_{12} = c_{22}, c_{13} = c_{23}, c_{21} = −c_{22} − c_{23}.

In this case, deriving the associated hypothesis from the nullspace isn't straightforward. So instead, we can verify that the claimed associated hypothesis is correct. Consider γ_1 = (1/2)(ρ_1 + ρ_2); then

(1/(2n))(nµ_{11} + nµ_{21}) = (1/2)[(1/(3n))(nµ_{11} + nµ_{12} + nµ_{13}) + (1/(3n))(nµ_{21} + nµ_{22} + nµ_{23})],

so

(1/2)(µ_{11} + µ_{21}) = (1/2)[(1/3)(µ_{11} + µ_{12} + µ_{13}) + (1/3)(µ_{21} + µ_{22} + µ_{23})],

that is,

µ_{11} + µ_{21} = (1/3)(µ_{11} + µ_{12} + µ_{13} + µ_{21} + µ_{22} + µ_{23}),

or

(2/3)µ_{11} − (1/3)µ_{12} − (1/3)µ_{13} + (2/3)µ_{21} − (1/3)µ_{22} − (1/3)µ_{23} = 0.

Thus, if we let c_{22} = −1/3 and c_{23} = −1/3, then c_{11} = 2/3, c_{12} = −1/3, c_{13} = −1/3, and c_{21} = 2/3, so this contrast lies in the nullspace, verifying the associated hypothesis (the equation for γ_2 is verified similarly).

5 Conclusion

We have shown how to use Theorem 3.3 and Corollary 3.1 [1] to derive certain associated hypotheses, namely those for sequential sums of squares of the form SS(no effect | only A present). The corresponding computations for sequential sums of squares such as SS(only A present | no interaction) turn out to be much more difficult for unbalanced data. These need to be carried out to make the theorem truly usable.

REFERENCES

[1] Beder, J. Linear Models and Design. Unpublished manuscript, 2014.
[2] Searle, S.R. Linear Models. Wiley, New York, 1971.
[3] Searle, S.R. Linear Models for Unbalanced Data. Wiley, New York, 1987.
