Handout #5 Computation of estimates, ANOVA Table and Orthogonal Designs
In this handout we derive, for an arbitrary design, estimates of treatment effects in each stratum. Then the results are applied to completely randomized designs, randomized complete block designs, and Latin squares.

1. Computation of estimates in a given stratum

Project the randomization model (4.1)-(4.2) in Handout 4 onto the stratum $\mathcal{W}_\beta$; we obtain the following linear model:

    $W_\beta y = W_\beta X \alpha + W_\beta \varepsilon$,    (5.1)

with

    $E(W_\beta \varepsilon) = 0$ and $\mathrm{cov}(W_\beta \varepsilon) = \xi_\beta W_\beta$,    (5.2)

where (5.2) follows from $\mathrm{cov}(W_\beta \varepsilon) = W_\beta V W_\beta = W_\beta (\sum_{j=0}^{s} \xi_j W_j) W_\beta = \xi_\beta W_\beta$ (since $\beta \neq j \Rightarrow W_\beta W_j = 0$).

Our goal is to compute the best linear unbiased estimator of any estimable treatment function $c'\alpha$ in $\mathcal{W}_\beta$, i.e., the estimator of $c'\alpha$ with minimum variance among unbiased estimators of the form $a' W_\beta y$. When restricted to the vectors in $\mathcal{W}_\beta$, the covariance matrix $\xi_\beta W_\beta$ acts the same as $\xi_\beta I$: $\xi_\beta W_\beta v = \xi_\beta v$ for $v \in \mathcal{W}_\beta$. Therefore the Gauss-Markov Theorem applies, and the least squares estimators are the best linear unbiased estimators. From (.) and (.4) in an earlier handout, we have

Lemma 5.1. A treatment function $c'\alpha$ is estimable in stratum $\mathcal{W}_\beta$ (i.e., there exists an unbiased estimator of the form $a' W_\beta y$) if and only if $c \in \mathcal{R}(X' W_\beta X)$. If $c'\alpha$ is estimable in stratum $\mathcal{W}_\beta$, then its best linear unbiased estimator in $\mathcal{W}_\beta$ is $c'\hat\alpha$, where $\hat\alpha$ is any solution of the normal equation

    $X' W_\beta X \hat\alpha = X' W_\beta y$,    (5.3)

and $\mathrm{var}(c'\hat\alpha) = \xi_\beta\, c' (X' W_\beta X)^{-} c$.

Proof. The design matrix in model (5.1) is $\tilde X = W_\beta X$; so we have $\tilde X' \tilde X = (W_\beta X)'(W_\beta X) = X' W_\beta X$, and the right-hand side of the normal equation is $\tilde X' W_\beta y = (W_\beta X)' W_\beta y = X' W_\beta y$.

The matrix $X' W_\beta X$ is called the information matrix for treatment effects in stratum $\mathcal{W}_\beta$. It has the following properties:

Lemma 5.2. The information matrix $X' W_\beta X$ is symmetric, nonnegative definite, and has zero row sums.

Proof. Symmetry and nonnegative-definiteness are straightforward. To show that it has zero row sums, we note that by the definition of $X$, each of its rows has exactly one 1 and all the other entries are 0's; therefore $X 1_t = 1_N$, and $X' W_\beta X 1_t = X' W_\beta 1_N = 0$.
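As a numerical sketch (not part of the original handout), the normal equation (5.3) can be solved with a generalized inverse. The design, data, and the helper name `stratum_estimate` below are made up for illustration; `numpy.linalg.pinv` is used as one convenient choice of generalized inverse.

```python
import numpy as np

def stratum_estimate(X, W, y, c):
    """BLUE of a contrast c'alpha in the stratum with projection matrix W.

    Solves the normal equation (X' W X) alpha_hat = X' W y using the
    Moore-Penrose generalized inverse and returns c' alpha_hat.
    """
    C = X.T @ W @ X                      # information matrix X' W X
    Q = X.T @ W @ y                      # right-hand side X' W y
    alpha_hat = np.linalg.pinv(C) @ Q    # one solution of the normal equation
    return c @ alpha_hat

# Toy completely randomized design: N = 6 units, t = 3 treatments, 2 reps each.
X = np.kron(np.eye(3), np.ones((2, 1)))       # unit-treatment incidence matrix
N = X.shape[0]
W1 = np.eye(N) - np.ones((N, N)) / N          # projection onto the bottom stratum
y = np.array([3.0, 5.0, 2.0, 4.0, 7.0, 9.0])
c = np.array([1.0, -1.0, 0.0])                # a treatment contrast

est = stratum_estimate(X, W1, y, c)
# For this orthogonal toy design, est equals ybar_1 - ybar_2 = 4 - 3 = 1.
```

Since a contrast does not depend on which solution of the (singular) normal equation is taken, any generalized inverse gives the same value of `est`.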
For convenience, let

    $C_\beta = X' W_\beta X$    (5.4)

and

    $Q_\beta = X' W_\beta y$.    (5.5)

Then (5.3) becomes

    $C_\beta \hat\alpha = Q_\beta$.    (5.6)

Lemma 5.2 shows that $\mathrm{rank}(C_\beta) \le t - 1$, and that if $c'\alpha$ is estimable, then it must be a contrast. If $\mathrm{rank}(C_\beta) = t - 1$, then we say that the design is connected in stratum $\mathcal{W}_\beta$. In this case, all the treatment contrasts are estimable in $\mathcal{W}_\beta$.

2. ANOVA in a given stratum

Since $E(y) = \mu 1 + X\alpha$ and $\mathcal{R}(X) = \mathcal{T}$, under (5.1), $E(W_\beta y) = W_\beta E(y) \in W_\beta(\mathcal{T})$. Let $P_{W_\beta(\mathcal{T})} W_\beta y$ be the orthogonal projection of the data vector $W_\beta y$ onto $W_\beta(\mathcal{T})$. Then, since $W_\beta(\mathcal{T})$ is the range of $W_\beta X$, from an earlier handout,

    $P_{W_\beta(\mathcal{T})} W_\beta y = (W_\beta X)[(W_\beta X)'(W_\beta X)]^{-}(W_\beta X)' W_\beta y = (W_\beta X)(X' W_\beta X)^{-} X' W_\beta y$.

So

    $\|P_{W_\beta(\mathcal{T})} W_\beta y\|^2 = [(W_\beta X)(X' W_\beta X)^{-} X' W_\beta y]'(W_\beta X)(X' W_\beta X)^{-} X' W_\beta y$
        $= y' W_\beta X (X' W_\beta X)^{-} X' W_\beta y$    (5.7)
        $= \hat\alpha' Q_\beta$    (5.8)
        $= \hat\alpha' C_\beta \hat\alpha$.    (5.9)

The residual is $P_{\mathcal{W}_\beta \ominus W_\beta(\mathcal{T})} W_\beta y$, and we have

    $\|W_\beta y\|^2 = \|P_{W_\beta(\mathcal{T})} W_\beta y\|^2 + \|P_{\mathcal{W}_\beta \ominus W_\beta(\mathcal{T})} W_\beta y\|^2$.    (5.10)

Formula (5.10) gives the ANOVA in stratum $\mathcal{W}_\beta$. The first term is the treatment sum of squares and the second term is the residual sum of squares. Now,

    $E\|P_{\mathcal{W}_\beta \ominus W_\beta(\mathcal{T})} W_\beta y\|^2 = E(y' W_\beta P_{\mathcal{W}_\beta \ominus W_\beta(\mathcal{T})} W_\beta y)$
        $= [E(y)]' W_\beta P_{\mathcal{W}_\beta \ominus W_\beta(\mathcal{T})} W_\beta E(y) + \mathrm{tr}(W_\beta P_{\mathcal{W}_\beta \ominus W_\beta(\mathcal{T})} W_\beta V)$
        $= \mathrm{tr}(W_\beta P_{\mathcal{W}_\beta \ominus W_\beta(\mathcal{T})} W_\beta V)$    [since $W_\beta E(y) \in W_\beta(\mathcal{T})$, which is orthogonal to $\mathcal{W}_\beta \ominus W_\beta(\mathcal{T})$]
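The decomposition (5.10) can be checked numerically. The following sketch (made-up data, not from the handout) verifies that the treatment sum of squares $\hat\alpha' Q_\beta$ and the residual add up to $\|W_\beta y\|^2$ for a small completely randomized design:

```python
import numpy as np

# Within-stratum ANOVA check: ||W y||^2 = treatment SS + residual SS,
# with treatment SS = alpha_hat' Q as in (5.8). Data are made up.
X = np.kron(np.eye(3), np.ones((2, 1)))   # CRD: t = 3 treatments, 2 reps each
N = X.shape[0]
W = np.eye(N) - np.ones((N, N)) / N       # bottom stratum projection
y = np.array([3.0, 5.0, 2.0, 4.0, 7.0, 9.0])

C = X.T @ W @ X                           # information matrix
Q = X.T @ W @ y
alpha_hat = np.linalg.pinv(C) @ Q
trt_ss = alpha_hat @ Q                    # treatment sum of squares
total_ss = y @ W @ y                      # ||W y||^2 (W is idempotent)
resid_ss = total_ss - trt_ss              # residual sum of squares
```

Here `trt_ss` agrees with the between-treatment sum of squares and `resid_ss` with the within-treatment sum of squares computed directly from the group means.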
        $= \mathrm{tr}(\xi_\beta P_{\mathcal{W}_\beta \ominus W_\beta(\mathcal{T})} W_\beta)$    (since $V = \sum_{j=0}^{s} \xi_j W_j$ and $W_\beta W_j = 0$ for $j \neq \beta$)
        $= \xi_\beta\, \mathrm{tr}(P_{\mathcal{W}_\beta \ominus W_\beta(\mathcal{T})})$    (since $\mathcal{W}_\beta \ominus W_\beta(\mathcal{T}) \subseteq \mathcal{W}_\beta$, we have $P_{\mathcal{W}_\beta \ominus W_\beta(\mathcal{T})} W_\beta = P_{\mathcal{W}_\beta \ominus W_\beta(\mathcal{T})}$)
        $= \xi_\beta \{\dim(\mathcal{W}_\beta) - \dim[W_\beta(\mathcal{T})]\}$.

Similarly,

    $E\|P_{W_\beta(\mathcal{T})} W_\beta y\|^2 = [E(W_\beta y)]' P_{W_\beta(\mathcal{T})} E(W_\beta y) + \mathrm{tr}(W_\beta P_{W_\beta(\mathcal{T})} W_\beta V)$
        $= \alpha' X' W_\beta X (X' W_\beta X)^{-} X' W_\beta X \alpha + \xi_\beta\, \mathrm{tr}(P_{W_\beta(\mathcal{T})})$    [in (5.7), replace $W_\beta y$ with $E(W_\beta y)$; $W_\beta(\mathcal{T}) \subseteq \mathcal{W}_\beta$]
        $= \alpha' (X' W_\beta X) \alpha + \xi_\beta \dim[W_\beta(\mathcal{T})]$
        $= \alpha' C_\beta \alpha + \xi_\beta \dim[W_\beta(\mathcal{T})]$.

Summarizing the above discussion, we have the following ANOVA within stratum $\mathcal{W}_\beta$:

    Source       Sum of Squares                     d.f.                                                     MS                                                              E(MS)
    Treatments   $\hat\alpha' C_\beta \hat\alpha$   $\dim[W_\beta(\mathcal{T})]$                             $\hat\alpha' C_\beta \hat\alpha / \dim[W_\beta(\mathcal{T})]$   $\xi_\beta + \alpha' C_\beta \alpha / \dim[W_\beta(\mathcal{T})]$
    Residual     by subtraction                     $\dim(\mathcal{W}_\beta) - \dim[W_\beta(\mathcal{T})]$                                                                   $\xi_\beta$
    Total        $\|W_\beta y\|^2$                  $\dim(\mathcal{W}_\beta)$

Note that $\mathrm{rank}(C_\beta) = \mathrm{rank}(X' W_\beta X) = \dim[W_\beta(\mathcal{T})]$. If the design is connected in $\mathcal{W}_\beta$, then the treatment sum of squares has $t - 1$ degrees of freedom in $\mathcal{W}_\beta$.

3. Orthogonal designs

If $\mathcal{T} \ominus \mathcal{V} \subseteq \mathcal{W}_\beta$ for some $\beta$, then we say that we have an orthogonal design. In this case, for all $j \neq \beta$, $W_j(\mathcal{T} \ominus \mathcal{V}) = \{0\}$ and $\dim[W_j(\mathcal{T} \ominus \mathcal{V})] = 0$, and the treatment contrasts can only be estimated in $\mathcal{W}_\beta$. We therefore drop the superscript $\beta$ from $\hat\alpha^{(\beta)}$. In the following, we shall assume that $r_j > 0$ for all $j = 1, \ldots, t$, where $r_j$ is the number of replications of the $j$th treatment. Then $\dim(\mathcal{T} \ominus \mathcal{V}) = t - 1$, and hence all the treatment contrasts are estimable in $\mathcal{W}_\beta$. Furthermore,

    $W_\beta(\mathcal{T}) = W_\beta[\mathcal{V} \oplus (\mathcal{T} \ominus \mathcal{V})] = W_\beta(\mathcal{T} \ominus \mathcal{V}) = \mathcal{T} \ominus \mathcal{V}$,    (5.11)
and for each $x \in \mathcal{T}$,

    $W_\beta x = W_\beta(Kx) + W_\beta(P_{\mathcal{T} \ominus \mathcal{V}} x) = W_\beta(P_{\mathcal{T} \ominus \mathcal{V}} x) = P_{\mathcal{T} \ominus \mathcal{V}} x$.    (5.12)

The last equality holds because $P_{\mathcal{T} \ominus \mathcal{V}} x \in \mathcal{T} \ominus \mathcal{V} \subseteq \mathcal{W}_\beta$. It follows from (5.12) that $W_\beta X = P_{\mathcal{T} \ominus \mathcal{V}} X$, and hence

    $C_\beta = X' W_\beta X = X' P_{\mathcal{T} \ominus \mathcal{V}} X$    (5.13)

and

    $Q_\beta = X' W_\beta y = (W_\beta X)' y = (P_{\mathcal{T} \ominus \mathcal{V}} X)' y = X' P_{\mathcal{T} \ominus \mathcal{V}} y$.    (5.14)

Consider the following one-way layout model:

    $y = \mu 1 + X \alpha + \varepsilon$, with $E(\varepsilon) = 0$ and $\mathrm{cov}(\varepsilon) = \sigma^2 I$.

By (.8) in an earlier handout, estimates of $\alpha$ are solutions of $\tilde X' \tilde X \hat\alpha = \tilde X' \tilde y$, where $\tilde X = P_{\mathcal{T} \ominus \mathcal{V}} X$ and $\tilde y = P_{\mathcal{T} \ominus \mathcal{V}} y$. Then $\tilde X' \tilde X$ and $\tilde X' \tilde y$ are the same as $C_\beta$ and $Q_\beta$ in (5.13) and (5.14), respectively. Therefore when $\mathcal{T} \ominus \mathcal{V} \subseteq \mathcal{W}_\beta$, the best linear unbiased estimator $c'\hat\alpha$ of a treatment contrast is the same as under the above one-way layout model, i.e.,

    $c'\hat\alpha = \sum_{j=1}^{t} c_j \bar y_{j\cdot}$,    (5.15)

where $\bar y_{j\cdot}$ is the average of all the observations on the $j$th treatment. In the following, we show that

    $\mathrm{var}(c'\hat\alpha) = \xi_\beta \sum_{j=1}^{t} c_j^2 / r_j$.    (5.16)

By Lemma 5.1, we have

    $c = X' W_\beta X u$ for some $u$,    (5.17)

and

    $\mathrm{var}(c'\hat\alpha) = \xi_\beta\, c' (X' W_\beta X)^{-} c$.    (5.18)

Let $Ku$ denote the vector in $\mathbb{R}^t$ with all entries equal to $\frac{1}{N} \sum_{j=1}^{t} r_j u_j$. By the same argument as in the proof of Lemma 5.2, $W_\beta X K u = 0$, because $X K u$ has constant entries. It follows that $c = X' W_\beta X (u - Ku)$. So by replacing $u$ in (5.17) with $u - Ku$, we may assume $Ku = 0$. Then $(Xu)' 1 = 0$, i.e., $Xu \perp \mathcal{V}$. Since $Xu \in \mathcal{T}$, we have $Xu \in \mathcal{T} \ominus \mathcal{V}$. Recall that $\mathcal{T} \ominus \mathcal{V} \subseteq \mathcal{W}_\beta$. Therefore $Xu \in \mathcal{W}_\beta$, and hence $W_\beta X u = X u$. Then

    $c = X' W_\beta X u = X' X u = \Delta u$,

where $\Delta$ is the diagonal matrix whose $j$th diagonal entry is $r_j$. Hence

    $u = \Delta^{-1} c$.    (5.19)

From (5.18),

    $\mathrm{var}(c'\hat\alpha) = \xi_\beta\, c' (X' W_\beta X)^{-} c$
        $= \xi_\beta\, u' (X' W_\beta X)(X' W_\beta X)^{-} c$    [by (5.17)]
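Both (5.15) and the quadratic form behind (5.16) can be checked numerically. The following sketch (made-up data with unequal replication, not from the handout) verifies that $c'(X' W_\beta X)^{-} c = \sum_j c_j^2 / r_j$ and that $c'\hat\alpha$ reduces to a combination of treatment means; the Moore-Penrose inverse serves as the generalized inverse.

```python
import numpy as np

# Orthogonal design (CRD, bottom stratum): verify the identities behind
# (5.15) and (5.16) with unequal replication. Data are made up.
r = np.array([2, 3, 4])                        # replications r_1, r_2, r_3
X = np.repeat(np.eye(3), r, axis=0)            # incidence matrix, N = 9 units
N = X.shape[0]
W = np.eye(N) - np.ones((N, N)) / N            # bottom stratum projection
y = np.array([1.0, 3.0, 2.0, 4.0, 6.0, 5.0, 5.0, 6.0, 8.0])
c = np.array([1.0, -1.0, 0.0])                 # a treatment contrast

C = X.T @ W @ X                                # information matrix C
quad = c @ np.linalg.pinv(C) @ c               # c'(X'WX)^- c, should be 1/2 + 1/3
alpha_hat = np.linalg.pinv(C) @ (X.T @ W @ y)
est = c @ alpha_hat                            # should equal ybar_1 - ybar_2

ybar = (X.T @ y) / r                           # treatment means ybar_j
```

Because $c$ is a contrast, it lies in the range of $C_\beta$, so `quad` does not depend on the choice of generalized inverse.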
        $= \xi_\beta\, u' c$    [$x = E^{-} d$ is a solution of $E x = d$ if a solution exists]
        $= \xi_\beta\, c' \Delta^{-1} c$    [by (5.19)],

which is (5.16). Also, the treatment sum of squares is

    $\|P_{W_\beta(\mathcal{T})} W_\beta y\|^2 = \|P_{W_\beta(\mathcal{T})} y\|^2$    [since $W_\beta(\mathcal{T}) \subseteq \mathcal{W}_\beta$, $P_{W_\beta(\mathcal{T})} W_\beta = P_{W_\beta(\mathcal{T})}$]
        $= \|P_{\mathcal{T} \ominus \mathcal{V}}\, y\|^2$    [by (5.11)]
        $= \|(P_{\mathcal{T}} - K) y\|^2$
        $= \sum_{j=1}^{t} r_j \Big[\bar y_{j\cdot} - \frac{1}{N} \sum_{l=1}^{N} y_l\Big]^2$.

The above discussion shows that if $\mathcal{T} \ominus \mathcal{V} \subseteq \mathcal{W}_\beta$ for some $\beta$, then we have simple estimators and a simple ANOVA:

    Source             Sum of Squares                              d.f.                                  Mean square    E(MS)
    $\mathcal{W}_1$    $\|W_1 y\|^2$                               $\dim(\mathcal{W}_1)$                 SS/d.f.        $\xi_1$
    ...                ...                                         ...                                   ...            ...
    Treatments         $\sum_j r_j (\bar y_{j\cdot} - \bar y)^2$   $t - 1$                               SS/$(t-1)$     $\xi_\beta + \sum_j r_j (\alpha_j - \bar\alpha)^2 / (t-1)$
    Residual           by subtraction                              $\dim(\mathcal{W}_\beta) - (t - 1)$                  $\xi_\beta$
    ...                ...                                         ...                                   ...            ...
    $\mathcal{W}_s$    $\|W_s y\|^2$                               $\dim(\mathcal{W}_s)$                 SS/d.f.        $\xi_s$
    Total              $\|y - Ky\|^2$                              $N - 1$

Here the Treatments and Residual lines together make up stratum $\mathcal{W}_\beta$; each other stratum $\mathcal{W}_j$ ($j \neq \beta$, $j \neq 0$) contributes a single line.
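The closed form for the treatment sum of squares derived just above can be verified directly against the projection form. The data below are made up for illustration:

```python
import numpy as np

# Check that ||(P_T - K) y||^2 equals sum_j r_j (ybar_j - ybar)^2.
r = np.array([2, 3, 4])
X = np.repeat(np.eye(3), r, axis=0)        # unit-treatment incidence matrix
y = np.array([1.0, 3.0, 2.0, 4.0, 6.0, 5.0, 5.0, 6.0, 8.0])
N = len(y)

P_T = X @ np.linalg.pinv(X.T @ X) @ X.T    # projection onto T = R(X)
K = np.ones((N, N)) / N                    # projection onto V = span(1)
ss_proj = y @ (P_T - K) @ y                # ||(P_T - K) y||^2; P_T - K idempotent

ybar_j = (X.T @ y) / r                     # treatment means
ss_means = (r * (ybar_j - y.mean()) ** 2).sum()
```

The two quantities agree to machine precision; `ss_proj` is computed as a quadratic form because $P_{\mathcal{T}} - K$ is an orthogonal projection.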
In the ANOVA table above, $\bar\alpha = \frac{1}{N} \sum_{j=1}^{t} r_j \alpha_j$.

Now we specialize these results to three simple orthogonal designs: completely randomized designs, randomized complete block designs, and Latin squares.

In a completely randomized design, the two strata are $\mathcal{W}_0 = \mathcal{V}$ and $\mathcal{W}_1 = \mathcal{V}^\perp$. Obviously $\mathcal{T} \ominus \mathcal{V} \subseteq \mathcal{V}^\perp$; so it is an orthogonal design.

In a randomized complete block design, each treatment appears in each block exactly once. The condition of proportional frequencies is satisfied by the treatment and block factors. It follows from the Theorem in Handout 2 that $(\mathcal{T} \ominus \mathcal{V}) \perp (\mathcal{B} \ominus \mathcal{V})$. Also $(\mathcal{T} \ominus \mathcal{V}) \perp \mathcal{V}$. Therefore $\mathcal{T} \ominus \mathcal{V} \subseteq \mathcal{B}^\perp$ ($= \mathcal{W}_2$). This establishes the orthogonality of a randomized complete block design.

In a Latin square (or, more generally, randomized row-column designs in which all the treatments appear equally often in each row and equally often in each column), the condition of proportional frequencies is satisfied by the treatment and row factors, and also by the treatment and column factors. Thus $(\mathcal{T} \ominus \mathcal{V}) \perp (\mathcal{R} \ominus \mathcal{V})$ and $(\mathcal{T} \ominus \mathcal{V}) \perp (\mathcal{C} \ominus \mathcal{V})$. As a result, $\mathcal{T} \ominus \mathcal{V} \subseteq (\mathcal{R} + \mathcal{C})^\perp$ ($= \mathcal{W}_3$). Therefore for these designs, estimators of treatment contrasts and their variances are given by (5.15) and (5.16). Their ANOVA tables follow.

ANOVA table for a completely randomized design:

    Source       Sum of Squares                              d.f.       Mean square    E(MS)
    Treatments   $\sum_j r_j (\bar y_{j\cdot} - \bar y)^2$   $t - 1$    SS/$(t-1)$     $\xi_1 + \sum_j r_j (\alpha_j - \bar\alpha)^2 / (t-1)$
    Residual     by subtraction                              $N - t$                   $\xi_1$
    Total        $\sum_i (y_i - \bar y)^2$                   $N - 1$

ANOVA table for a randomized complete block design with $b$ blocks:

    Source       Sum of Squares                                    d.f.            Mean square    E(MS)
    Blocks       $t \sum_{i=1}^{b} (\bar y_{i\cdot} - \bar y)^2$   $b - 1$         SS/$(b-1)$     $\xi_1$
    Treatments   $b \sum_{j=1}^{t} (\bar y_{\cdot j} - \bar y)^2$  $t - 1$         SS/$(t-1)$     $\xi_2 + b \sum_j (\alpha_j - \bar\alpha)^2 / (t-1)$
    Residual     by subtraction                                    $(b-1)(t-1)$                   $\xi_2$
    Total        $\sum_{i,j} (y_{ij} - \bar y)^2$                  $bt - 1$
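The randomized complete block sums of squares above can be computed in a few lines. The block of observations below is made up for illustration, with rows indexing blocks and columns indexing treatments:

```python
import numpy as np

# RCBD ANOVA: y[i, j] is the observation on treatment j in block i,
# with b = 4 blocks and t = 3 treatments (made-up data).
y = np.array([[10.0, 12.0, 15.0],
              [11.0, 14.0, 16.0],
              [ 9.0, 11.0, 13.0],
              [12.0, 13.0, 17.0]])
b, t = y.shape
grand = y.mean()
block_ss = t * ((y.mean(axis=1) - grand) ** 2).sum()   # d.f. b - 1
trt_ss   = b * ((y.mean(axis=0) - grand) ** 2).sum()   # d.f. t - 1
total_ss = ((y - grand) ** 2).sum()                    # d.f. b*t - 1
resid_ss = total_ss - block_ss - trt_ss                # d.f. (b-1)(t-1)
```

The three component sums of squares add back to the total, mirroring the "by subtraction" line of the table.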
ANOVA table for a Latin square design of order $t$:

    Source       Sum of Squares                                          d.f.            Mean square    E(MS)
    Rows         $t \sum_{i=1}^{t} (\bar y_{i\cdot\cdot} - \bar y)^2$    $t - 1$         SS/$(t-1)$     $\xi_1$
    Columns      $t \sum_{j=1}^{t} (\bar y_{\cdot j\cdot} - \bar y)^2$   $t - 1$         SS/$(t-1)$     $\xi_2$
    Treatments   $t \sum_{l=1}^{t} (\bar y_{\cdot\cdot l} - \bar y)^2$   $t - 1$         SS/$(t-1)$     $\xi_3 + t \sum_l (\alpha_l - \bar\alpha)^2 / (t-1)$
    Residual     by subtraction                                          $(t-1)(t-2)$                   $\xi_3$
    Total        $\sum (y - \bar y)^2$                                   $t^2 - 1$
Lecture 11: Regression Methods I (Linear Regression) Fall, 2017 1 / 40 Outline Linear Model Introduction 1 Regression: Supervised Learning with Continuous Responses 2 Linear Models and Multiple Linear
More informationCSC Metric Embeddings Lecture 8: Sparsest Cut and Embedding to
CSC21 - Metric Embeddings Lecture 8: Sparsest Cut and Embedding to Notes taken by Nilesh ansal revised by Hamed Hatami Summary: Sparsest Cut (SC) is an important problem ith various applications, including
More informationRegression With a Categorical Independent Variable
Regression ith a Independent Variable ERSH 8320 Slide 1 of 34 Today s Lecture Regression with a single categorical independent variable. Today s Lecture Coding procedures for analysis. Dummy coding. Relationship
More informationXβ is a linear combination of the columns of X: Copyright c 2010 Dan Nettleton (Iowa State University) Statistics / 25 X =
The Gauss-Markov Linear Model y Xβ + ɛ y is an n random vector of responses X is an n p matrix of constants with columns corresponding to explanatory variables X is sometimes referred to as the design
More informationLecture 11: Regression Methods I (Linear Regression)
Lecture 11: Regression Methods I (Linear Regression) 1 / 43 Outline 1 Regression: Supervised Learning with Continuous Responses 2 Linear Models and Multiple Linear Regression Ordinary Least Squares Statistical
More informationRegression Models - Introduction
Regression Models - Introduction In regression models there are two types of variables that are studied: A dependent variable, Y, also called response variable. It is modeled as random. An independent
More informationSTAT5044: Regression and Anova. Inyoung Kim
STAT5044: Regression and Anova Inyoung Kim 2 / 51 Outline 1 Matrix Expression 2 Linear and quadratic forms 3 Properties of quadratic form 4 Properties of estimates 5 Distributional properties 3 / 51 Matrix
More information