Sample Geometry. Edps/Soc 584, Psych 594. Carolyn J. Anderson


1 Sample Geometry. Edps/Soc 584, Psych 594. Carolyn J. Anderson. Department of Educational Psychology, University of Illinois at Urbana-Champaign. © Board of Trustees, University of Illinois. Spring 2017.

2 Outline. Motivation. Variable space. Observation space: what it is, the mean, covariance. A global summary statistic for $\mathbf{S}$. Reading: Johnson & Wichern, Chapter 3.

3 Motivation. The sample $\bar{\mathbf{x}}$, $\mathbf{S}$, and $\mathbf{R}$ have geometric interpretations. The geometric interpretation shows how $\bar{\mathbf{x}}$, $\mathbf{S}$, and $\mathbf{R}$ are related to the $n \times p$ data matrix $\mathbf{X}$. Studying these relationships provides insight into multivariate methods. We also introduce a summary statistic, based on this geometry, that describes the variability in the data.

4 Assumptions. The geometric interpretation of the sample descriptive statistics (i.e., $\bar{\mathbf{x}}$, $\mathbf{S}$, and $\mathbf{R}$) is based on two assumptions. (1) Each row of the data matrix is unrelated to all of the others (i.e., independent observations). Each row corresponds to a case (individual, sampling unit) and is a multivariate observation, a point in $p$-dimensional space:
$$\mathbf{X}_{n\times p} = \begin{pmatrix} X_{11} & X_{12} & \cdots & X_{1p}\\ X_{21} & X_{22} & \cdots & X_{2p}\\ \vdots & \vdots & & \vdots\\ X_{n1} & X_{n2} & \cdots & X_{np} \end{pmatrix}$$
(2) The joint distribution of all $p$ variables is the same for all cases (individuals, rows); that is, we are drawing multivariate observations from the same unchanging population.

5 The Variable Space. Each row of $\mathbf{X}$ is a point in $p$-space, the variable space. The variables (columns of $\mathbf{X}$) define the axes. Consider the $n$ points in the $p$-dimensional space: the center of the point cloud is $\bar{\mathbf{x}}$, and the variability and covariability are measured by $\mathbf{S}$.

6 The Variable Space: Example. [Figure: the rows of a small data matrix plotted as points in the $(x_1, x_2)$ plane, with the sample mean $\bar{\mathbf{x}}$ marking the center of the cloud.] What would happen to the picture if we multiplied the data by 2?
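A quick numerical look at this slide's question; a sketch with made-up points, since the figure's numbers did not survive transcription (NumPy assumed):

```python
# Hypothetical data: n = 4 cases on p = 2 variables (rows = points in the
# variable space). Doubling the data doubles the center of the cloud and
# quadruples the spread.
import numpy as np

X = np.array([[0.0, 4.0],
              [1.0, 1.0],
              [2.0, 3.0],
              [4.0, -1.0]])

xbar = X.mean(axis=0)          # center of the point cloud
S = np.cov(X, rowvar=False)    # covariance matrix (divisor n - 1)

X2 = 2.0 * X
print(X2.mean(axis=0) / xbar)          # every entry is 2: the mean doubles
print(np.cov(X2, rowvar=False) / S)    # every entry is 4: S quadruples
```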

7 The Observation Space. An important alternative geometric representation of $\mathbf{X}$ considers the data as $p$ points in an $n$-dimensional space. Each observation (case, individual) defines an axis (dimension) of the space. Each column of $\mathbf{X}$ is a vector in the $n$-space:
$$\mathbf{X}_{n\times p} = \begin{pmatrix} X_{11} & X_{12} & \cdots & X_{1p}\\ X_{21} & X_{22} & \cdots & X_{2p}\\ \vdots & \vdots & & \vdots\\ X_{n1} & X_{n2} & \cdots & X_{np} \end{pmatrix} = (\mathbf{y}_1, \mathbf{y}_2, \ldots, \mathbf{y}_p), \qquad \text{where } \mathbf{y}_k = \begin{pmatrix} x_{1k}\\ x_{2k}\\ \vdots\\ x_{nk} \end{pmatrix} \text{ for } k = 1, \ldots, p.$$

8 The Observation Space: Example. The data:
$$\mathbf{X}_{2\times 4} = \begin{pmatrix} 0 & 1 & 2 & 4\\ 4 & 1 & 3 & -1 \end{pmatrix} = (\mathbf{y}_1, \mathbf{y}_2, \mathbf{y}_3, \mathbf{y}_4)$$
[Figure: the four vector variables $\mathbf{y}_1, \ldots, \mathbf{y}_4$ drawn against the row 1 (case 1) and row 2 (case 2) axes.]
In this space, we talk about the lengths of the vector variables and about the angles between them.

9 The Observation Space: The Mean. Question: How does $\bar{\mathbf{x}}$ relate to the vectors in the observation space? Answer: through the projection of each vector variable onto $\mathbf{1}_n$. Quick recall: the projection of $\mathbf{x}$ onto $\mathbf{y}$ equals
$$\left(\frac{\mathbf{x}'\mathbf{y}}{\mathbf{y}'\mathbf{y}}\right)\mathbf{y} = \left(\frac{\mathbf{x}'\mathbf{y}}{L_{\mathbf{y}}}\right)\frac{\mathbf{y}}{L_{\mathbf{y}}}, \qquad L_{\mathbf{y}} = \sqrt{\mathbf{y}'\mathbf{y}}.$$
[Figure: $\mathbf{x}$, $\mathbf{y}$, the angle $\theta$ between them, and the projection of $\mathbf{x}$ onto $\mathbf{y}$.]

10 The Mean & Projection. The equal angular vector is the $(n\times 1)$ vector $\mathbf{1}_n = (1, 1, \ldots, 1)'$. The projection of $\mathbf{y}_k = (x_{1k}, x_{2k}, \ldots, x_{nk})'$ onto the vector $\mathbf{1}_n$ is
$$\left(\frac{\mathbf{y}_k'\mathbf{1}_n}{\mathbf{1}_n'\mathbf{1}_n}\right)\mathbf{1}_n = \underbrace{\left(\frac{\sum_{j=1}^n x_{jk}}{n}\right)}_{\bar{x}_k}\mathbf{1}_n = \bar{x}_k\,\mathbf{1}_n.$$

11 The Mean in the Observation Space. The sample mean $\bar{x}_k$ of the $k$th variable is the multiple of $\mathbf{1}_n$ required to give the projection of $\mathbf{y}_k$ onto the vector $\mathbf{1}_n$. Projection of $\mathbf{y}_k$ onto $\mathbf{1}_n$: $\bar{x}_k\mathbf{1}_n$. The vector $\mathbf{d}_k = \mathbf{y}_k - \bar{x}_k\mathbf{1}_n$ is the deviation vector. We have the decomposition of $\mathbf{y}_k$ into two parts:
$$\mathbf{y}_k = \underbrace{\bar{x}_k\mathbf{1}_n}_{\text{mean}} + \underbrace{\mathbf{y}_k - \bar{x}_k\mathbf{1}_n}_{\text{deviation from mean}} = \bar{x}_k\mathbf{1}_n + \mathbf{d}_k.$$
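A minimal numerical sketch of this decomposition, using the first column of the three-variable example on slide 17 (NumPy assumed):

```python
# Project y_k onto the equal angular vector 1_n; the multiplier is the sample
# mean, and the leftover deviation vector is orthogonal to 1_n.
import numpy as np

y = np.array([700.0, 500.0, 660.0])   # y_1 from the slide-17 example
ones = np.ones_like(y)

xbar_k = (y @ ones) / (ones @ ones)   # projection multiplier = sample mean
d = y - xbar_k * ones                 # deviation vector d_k

print(xbar_k)        # 620.0
print(d)             # [  80. -120.   40.]
print(d @ ones)      # 0.0: d_k is perpendicular to 1_n
```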

12 Back to our little example. [Figure: $\mathbf{y}_1, \ldots, \mathbf{y}_4$ plotted against the case 1 and case 2 axes, with the direction of $\mathbf{1} = (1,1)'$ marked.] In the direction of $\mathbf{1} = (1,1)'$:
$\bar{x}_1\mathbf{1} = ([0(1)+4(1)]/2)\,\mathbf{1} = (2.0)\,\mathbf{1}$, so $\bar{x}_1 = 2$
$\bar{x}_2\mathbf{1} = ([1(1)+1(1)]/2)\,\mathbf{1} = (1.0)\,\mathbf{1}$, so $\bar{x}_2 = 1$
$\bar{x}_3\mathbf{1} = ([2(1)+3(1)]/2)\,\mathbf{1} = (2.5)\,\mathbf{1}$, so $\bar{x}_3 = 2.5$
$\bar{x}_4\mathbf{1} = ([4(1)-1(1)]/2)\,\mathbf{1} = (1.5)\,\mathbf{1}$, so $\bar{x}_4 = 1.5$

13 2-dimensional example continued. For our 2-case example with 4 variables, the deviation vectors $\mathbf{d}_k = \mathbf{y}_k - \bar{x}_k\mathbf{1}$ are
$$\mathbf{d}_1 = \begin{pmatrix} -2\\ 2\end{pmatrix}, \quad \mathbf{d}_2 = \begin{pmatrix} 0\\ 0\end{pmatrix}, \quad \mathbf{d}_3 = \begin{pmatrix} -0.5\\ 0.5\end{pmatrix}, \quad \mathbf{d}_4 = \begin{pmatrix} 2.5\\ -2.5\end{pmatrix}.$$
Note: every nonzero deviation vector is a multiple of $(-1, 1)'$, so each well-defined entry of $\mathbf{R}$ is $\pm 1$. What is the rank of this correlation matrix?

14 Deviation Vectors: Variances & Covariance. Sample variance: the squared lengths of the deviation vectors,
$$L_{\mathbf{d}_k}^2 = \mathbf{d}_k'\mathbf{d}_k = \sum_{j=1}^n (x_{jk}-\bar{x}_k)^2,$$
are sums of squared deviations from the mean, so
$$s_{kk} = \frac{1}{n}L_{\mathbf{d}_k}^2 = \frac{1}{n}\sum_{j=1}^n (x_{jk}-\bar{x}_k)^2.$$
Sample covariance: the inner products between two deviation vectors,
$$\mathbf{d}_k'\mathbf{d}_i = \sum_{j=1}^n (x_{jk}-\bar{x}_k)(x_{ji}-\bar{x}_i),$$
are sums of cross products, so
$$s_{ik} = \frac{1}{n}\mathbf{d}_k'\mathbf{d}_i = \frac{1}{n}\sum_{j=1}^n (x_{jk}-\bar{x}_k)(x_{ji}-\bar{x}_i).$$
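These identities are easy to verify numerically; a sketch using the divisor-$n$ convention of this slide and the slide-17 data (NumPy assumed):

```python
# Deviation vectors give variances (squared lengths / n) and covariances
# (inner products / n) all at once via D'D / n.
import numpy as np

X = np.array([[700.0, 650.0, 750.0],
              [500.0, 550.0, 640.0],
              [660.0, 750.0, 650.0]])   # the slide-17 example: n = 3, p = 3
n = X.shape[0]

D = X - X.mean(axis=0)    # columns of D are the deviation vectors d_k
S_n = (D.T @ D) / n       # s_kk on the diagonal, s_ik off the diagonal

print(np.diag(S_n))       # squared lengths / n: [7466.67 6666.67 2466.67]
print(np.allclose(S_n, np.cov(X, rowvar=False, bias=True)))   # True
```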

15 Deviation Vectors: Correlation. The cosine of the angle between two deviation vectors equals
$$\cos(\theta_{ik}) = \frac{\mathbf{d}_i'\mathbf{d}_k}{L_{\mathbf{d}_i} L_{\mathbf{d}_k}} = \frac{\sum_{j=1}^n (x_{ji}-\bar{x}_i)(x_{jk}-\bar{x}_k)}{\sqrt{\sum_{j=1}^n (x_{ji}-\bar{x}_i)^2}\,\sqrt{\sum_{j=1}^n (x_{jk}-\bar{x}_k)^2}} = r_{ik}.$$
The cosine of the angle between two deviation vectors is the sample correlation coefficient of the variables.
If two deviation vectors have the same orientation (same direction), $\theta = 0^\circ$, so $\cos(\theta) = 1 = r_{ik}$.
If two deviation vectors have opposite orientations, $\theta = 180^\circ$, so $\cos(\theta) = -1 = r_{ik}$.
If two deviation vectors are perpendicular (orthogonal), $\theta = 90^\circ$, so $\cos(\theta) = 0 = r_{ik}$.
Back to the simple example (slides 12-13).
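A small check that the cosine of the angle between deviation vectors is exactly $r_{ik}$, using variables 1 and 2 of the slide-17 example (NumPy assumed):

```python
# cos(theta) between deviation vectors vs. the sample correlation coefficient.
import numpy as np

x = np.array([700.0, 500.0, 660.0])   # variable 1
y = np.array([650.0, 550.0, 750.0])   # variable 2

dx, dy = x - x.mean(), y - y.mean()   # deviation vectors
cos_theta = (dx @ dy) / (np.linalg.norm(dx) * np.linalg.norm(dy))

print(cos_theta)                          # 0.756...
print(np.corrcoef(x, y)[0, 1])            # identical
print(np.degrees(np.arccos(cos_theta)))   # the angle, about 40.9 degrees
```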

16 Correlation: Simple Example. [Figure: the deviation vectors of the 2-case example plotted against the case 1 and case 2 axes; they all lie on the line spanned by $(-1, 1)'$.] Because the deviation vectors are collinear, each well-defined entry of $\mathbf{R}$ is $+1$ or $-1$.

17 3-Dimensional Example. Suppose that $p = 3$, $n = 3$, and the data are
$$\mathbf{X} = \begin{pmatrix} 700 & 650 & 750\\ 500 & 550 & 640\\ 660 & 750 & 650 \end{pmatrix}.$$
So
$$\mathbf{y}_1 = \begin{pmatrix} 700\\ 500\\ 660\end{pmatrix}, \quad \mathbf{y}_2 = \begin{pmatrix} 650\\ 550\\ 750\end{pmatrix}, \quad \mathbf{y}_3 = \begin{pmatrix} 750\\ 640\\ 650\end{pmatrix},$$
and
$\bar{x}_1 = (700 + 500 + 660)/3 = 620$
$\bar{x}_2 = (650 + 550 + 750)/3 = 650$
$\bar{x}_3 = (750 + 640 + 650)/3 = 680$

18 3-Dimensional Example: Variable 1. [Figure: the vector $\mathbf{y}_1$ and the equal angular vector $\mathbf{1} = (1,1,1)'$ drawn against the case 1, case 2, and case 3 axes.]

19 3-Dimensional Example: Graph of Variables. [Figure: the vectors $\mathbf{y}_1$, $\mathbf{y}_2$, $\mathbf{y}_3$ and $\mathbf{1} = (1,1,1)'$ drawn against the case 1, case 2, and case 3 axes.]

20 3-Dimensional Example. The means are the multiples of $\mathbf{1}$ that give the projections of the $\mathbf{y}_i$ onto $\mathbf{1}$. Using the first variable for illustration:
$$\bar{x}_1 = \frac{\mathbf{y}_1'\mathbf{1}}{\mathbf{1}'\mathbf{1}} = \frac{700 + 500 + 660}{3} = 620, \qquad \bar{x}_1\mathbf{1} = \begin{pmatrix} 620\\ 620\\ 620\end{pmatrix},$$
$$\mathbf{d}_1 = \mathbf{y}_1 - \bar{x}_1\mathbf{1} = \begin{pmatrix} 700\\ 500\\ 660\end{pmatrix} - \begin{pmatrix} 620\\ 620\\ 620\end{pmatrix} = \begin{pmatrix} 80\\ -120\\ 40\end{pmatrix}.$$
$\mathbf{d}_1$ is perpendicular to $\bar{x}_1\mathbf{1}$, as it should be:
$$\mathbf{d}_1'(\bar{x}_1\mathbf{1}) = (80, -120, 40)\begin{pmatrix} 620\\ 620\\ 620\end{pmatrix} = 620(80 - 120 + 40) = 0.$$

21 Deviation Vectors. Notes (continued): $\mathbf{y}_1$ is decomposed into two parts, $\mathbf{y}_1 = \bar{x}_1\mathbf{1}_n + \mathbf{d}_1$. If $\mathbf{d}_i = \mathbf{0}$, then $\mathbf{y}_i = \bar{x}_i\mathbf{1}$. All of them:
$$\mathbf{d}_1 = \begin{pmatrix} 80\\ -120\\ 40\end{pmatrix}, \quad \mathbf{d}_2 = \begin{pmatrix} 0\\ -100\\ 100\end{pmatrix}, \quad \mathbf{d}_3 = \begin{pmatrix} 70\\ -40\\ -30\end{pmatrix}.$$

22 Graph of 3 deviation vectors. [Figure: $\mathbf{d}_1$, $\mathbf{d}_2$, and $\mathbf{d}_3$ drawn against the case 1, case 2, and case 3 axes.]

23 Variances. Variances are proportional to the squared lengths of the deviation vectors.
Variable 1: $L_{\mathbf{d}_1}^2 = 80^2 + (-120)^2 + 40^2 = 22400 = n s_{11} = 3 s_{11}$. So $s_{11} = 22400/3 = 7466.67$ and $\sqrt{s_{11}} = 86.41$.
Variable 2: $L_{\mathbf{d}_2}^2 = 0^2 + (-100)^2 + 100^2 = 20000 = 3 s_{22}$. So $s_{22} = 20000/3 = 6666.67$ and $\sqrt{s_{22}} = 81.65$.
Variable 3: $L_{\mathbf{d}_3}^2 = 70^2 + (-40)^2 + (-30)^2 = 7400 = 3 s_{33}$. So $s_{33} = 7400/3 = 2466.67$ and $\sqrt{s_{33}} = 49.67$.

24 Correlations. Correlations are the cosines of the angles between deviation vectors:
$$r_{12} = \frac{\mathbf{d}_1'\mathbf{d}_2}{L_{\mathbf{d}_1} L_{\mathbf{d}_2}} = \frac{80(0) - 120(-100) + 40(100)}{(149.67)(141.42)} = \frac{16000}{21166.3} = .756,$$
and the angle between $\mathbf{d}_1$ and $\mathbf{d}_2$ is $\cos^{-1}(.756) = 40.9^\circ$.
$$r_{13} = \frac{\mathbf{d}_1'\mathbf{d}_3}{L_{\mathbf{d}_1} L_{\mathbf{d}_3}} = \frac{80(70) - 120(-40) + 40(-30)}{(149.67)(86.02)} = \frac{9200}{12874.6} = .715,$$
and the angle between $\mathbf{d}_1$ and $\mathbf{d}_3$ is $\cos^{-1}(.715) = 44.4^\circ$.
$$r_{23} = \frac{\mathbf{d}_2'\mathbf{d}_3}{L_{\mathbf{d}_2} L_{\mathbf{d}_3}} = \frac{0(70) - 100(-40) + 100(-30)}{(141.42)(86.02)} = \frac{1000}{12164.9} = .082,$$
and the angle between $\mathbf{d}_2$ and $\mathbf{d}_3$ is $\cos^{-1}(.082) = 85.3^\circ$.
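All of the numbers on slides 23-24 can be reproduced in a few lines; a sketch assuming NumPy and the data matrix as reconstructed on slide 17:

```python
# Lengths of deviation vectors, variances (divisor n), correlations, angles.
import numpy as np

X = np.array([[700.0, 650.0, 750.0],
              [500.0, 550.0, 640.0],
              [660.0, 750.0, 650.0]])
D = X - X.mean(axis=0)

L = np.linalg.norm(D, axis=0)
print(L)              # [149.67 141.42  86.02]
print(L**2 / 3)       # s_11, s_22, s_33: [7466.67 6666.67 2466.67]

R = np.corrcoef(X, rowvar=False)
r = np.array([R[0, 1], R[0, 2], R[1, 2]])
print(r)                           # [0.756 0.715 0.082]
print(np.degrees(np.arccos(r)))    # [40.9 44.4 85.3]
```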

25 Summary: Observation Space. The axes of the observation space are defined by the cases (individuals), and the variables are represented as vectors.
The projection of the column $\mathbf{y}_k$ of the data matrix $\mathbf{X}_{n\times p}$ onto the equal angular vector $\mathbf{1}_n$ is the vector $\bar{x}_k\mathbf{1}_n$.
The information contained in $\mathbf{S}$ is obtained from the deviation vectors $\mathbf{d}_k = \mathbf{y}_k - \bar{x}_k\mathbf{1}_n = \{(x_{jk} - \bar{x}_k)\}$:
The variance: $L_{\mathbf{d}_k}^2 = \mathbf{d}_k'\mathbf{d}_k = n s_{kk}$.
The covariance: $\mathbf{d}_i'\mathbf{d}_k = n s_{ik}$.
The sample correlation coefficient $r_{ik}$ is the cosine of the angle between $\mathbf{d}_i$ and $\mathbf{d}_k$.

26 Random Sample and Expected Values. We sample from a population to learn about a phenomenon of interest. We'll think of the data ($n$ cases) as a random sample of cases from some large population (real or hypothetical). For every case in the sample, we measure $p$ variables. So:
The measurements of the $p$ variables for a single case will usually be correlated or dependent. Multivariate analysis comprises techniques designed to account for and/or study these correlations and dependencies.
The measurements from different cases must be independent. This assumption can be violated in a number of ways (e.g., observations collected over time, poor measurement or experimental procedures, repeated observations of the same case). Lack of independence can be a big problem.

27 Random Sample (continued).
$$\text{DATA} = \begin{pmatrix} \mathbf{X}_1'\\ \mathbf{X}_2'\\ \vdots\\ \mathbf{X}_n' \end{pmatrix}, \qquad \text{where } \mathbf{X}_j = (X_{j1}, X_{j2}, \ldots, X_{jp})'.$$
Each $\mathbf{X}_j$ is a random vector containing $p$ measurements. Distances between the $n$ points in $p$-space are determined by the joint probability function governing each and every $\mathbf{X}_j$,
$$f(\mathbf{x}_j) = f(x_{j1}, x_{j2}, \ldots, x_{jp}).$$

28 Important Result. Let $\mathbf{X}_1, \mathbf{X}_2, \ldots, \mathbf{X}_n$ be a random sample from the joint distribution $f(\mathbf{x}_j)$, which has mean vector $\boldsymbol{\mu}$ and covariance matrix $\boldsymbol{\Sigma}$. Then $\bar{\mathbf{X}}$ is an unbiased estimator of $\boldsymbol{\mu}$ and has covariance matrix $(1/n)\boldsymbol{\Sigma}$:
$$E(\bar{\mathbf{X}}) = \boldsymbol{\mu} \qquad \text{and} \qquad \text{Cov}(\bar{\mathbf{X}}) = \frac{1}{n}\boldsymbol{\Sigma}.$$

29 Estimating the Covariance Matrix. To estimate $\boldsymbol{\Sigma}$, first note that
$$E(\mathbf{S}_n) = \frac{n-1}{n}\boldsymbol{\Sigma} = \boldsymbol{\Sigma} - \frac{1}{n}\boldsymbol{\Sigma}.$$
So
$$\mathbf{S} = \frac{n}{n-1}\mathbf{S}_n = \frac{1}{n-1}\sum_{j=1}^n (\mathbf{X}_j - \bar{\mathbf{X}})(\mathbf{X}_j - \bar{\mathbf{X}})'$$
is an unbiased estimator of $\boldsymbol{\Sigma}$.
$\mathbf{S}$ without a subscript has divisor $(n-1)$ and is unbiased.
$\mathbf{S}_n$ with a subscript has divisor $n$ and is biased, but it is the maximum likelihood estimator $\hat{\boldsymbol{\Sigma}}$.
The $(i,k)$th element of $\mathbf{S}$ is an unbiased estimator of $\sigma_{ik}$:
$$s_{ik} = \frac{1}{n-1}\sum_{j=1}^n (x_{ji} - \bar{x}_i)(x_{jk} - \bar{x}_k).$$
However, $\sqrt{s_{ii}}$ is not an unbiased estimator of $\sigma_i$, and $r_{ik}$ is not an unbiased estimator of $\rho_{ik}$; the amount of bias is small for large $n$.
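A sketch of the two divisor conventions side by side (NumPy assumed; the population parameters below are made up for illustration):

```python
# Averaging S over many samples lands near Sigma (unbiased); averaging the
# divisor-n version S_n lands near ((n-1)/n) * Sigma.
import numpy as np

rng = np.random.default_rng(584)
Sigma = np.array([[4.0, 1.2],
                  [1.2, 1.0]])
n, reps = 10, 20000

S_sum = np.zeros((2, 2))
Sn_sum = np.zeros((2, 2))
for _ in range(reps):
    X = rng.multivariate_normal(np.zeros(2), Sigma, size=n)
    S_sum += np.cov(X, rowvar=False)              # divisor n - 1
    Sn_sum += np.cov(X, rowvar=False, bias=True)  # divisor n (the MLE)

print(S_sum / reps)    # close to Sigma
print(Sn_sum / reps)   # close to (9/10) * Sigma
```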

30 Generalized Variance. The sample covariance matrix describes the variation in, and covariation between, the $p$ variables:
$$\mathbf{S} = \begin{pmatrix} s_{11} & s_{12} & \cdots & s_{1p}\\ s_{12} & s_{22} & \cdots & s_{2p}\\ \vdots & \vdots & & \vdots\\ s_{p1} & s_{p2} & \cdots & s_{pp} \end{pmatrix}, \qquad s_{ik} = \frac{1}{n-1}\sum_{j=1}^n (x_{ji} - \bar{x}_i)(x_{jk} - \bar{x}_k).$$
With just one variable, we usually need only one statistic (the sample variance) to describe the variability in the data. With $p$ variables, we need $p$ variances and $p(p-1)/2$ covariances. It would be nice to have a single statistic that summarizes the information in $\mathbf{S}$ (i.e., reflects all the variances and covariances).

31 Generalized Sample Variance. GSV = Generalized Sample Variance = determinant of $\mathbf{S}$ = $|\mathbf{S}|$. Recall that the determinant of a $k\times k$ matrix $\mathbf{A}$ is a scalar:
$$|\mathbf{A}| = a_{11} \quad \text{for } k = 1,$$
$$|\mathbf{A}| = a_{11}a_{22} - a_{12}a_{21} \quad \text{for } k = 2,$$
$$|\mathbf{A}| = \sum_{j=1}^k a_{ij}\,|\mathbf{A}_{ij}|\,(-1)^{i+j} \quad \text{for } k > 1 \text{ (expansion along any row } i\text{)},$$
where $\mathbf{A}_{ij}$ is the $(k-1)\times(k-1)$ submatrix of $\mathbf{A}$ obtained by deleting the $i$th row and $j$th column.

32 Example: GSV.
$$\mathbf{S} = \begin{pmatrix} 3 & -1 & 2\\ -1 & 4 & 3\\ 2 & 3 & 9 \end{pmatrix}$$
$$\text{GSV} = 3\,(4(9) - 3(3)) - (-1)\,((-1)(9) - 3(2)) + 2\,((-1)(3) - 4(2)) = 3(27) + 1(-15) + 2(-11) = 44.$$
What does this mean? There are two ways to interpret it (geometrically):
Using the observation space ($n$-dimensional).
Using the variable space ($p$-dimensional).
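A direct transcription of the cofactor expansion into code, checked on this slide's matrix (a sketch; NumPy assumed):

```python
# Determinant by recursive cofactor expansion along the first row.
import numpy as np

def det_cofactor(A):
    k = A.shape[0]
    if k == 1:
        return A[0, 0]
    total = 0.0
    for j in range(k):
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)  # drop row 0, col j
        total += (-1) ** j * A[0, j] * det_cofactor(minor)
    return total

S = np.array([[3.0, -1.0, 2.0],
              [-1.0, 4.0, 3.0],
              [2.0, 3.0, 9.0]])
print(det_cofactor(S))     # 44.0
print(np.linalg.det(S))    # same, up to floating point
```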

33 GSV: n-space interpretation. The GSV = $|\mathbf{S}|$ is related to the volume of the parallelepiped (geometric figure) defined by the $p$ deviation vectors. e.g., for $p = 2$ and $n$ = whatever, the "volume" is the area of the parallelogram with sides $\mathbf{d}_1$ and $\mathbf{d}_2$. Remember $\mathbf{d}_i = \mathbf{y}_i - \bar{x}_i\mathbf{1}$ ($\mathbf{y}_i$ is the $i$th column vector of the matrix $\mathbf{X}$). Specifically,
$$\text{GSV} = |\mathbf{S}| = (n-1)^{-p}(\text{volume})^2 = \frac{(\text{volume})^2}{(n-1)^p}, \qquad \text{or} \qquad \text{volume} = |\mathbf{S}|^{1/2}(n-1)^{p/2}.$$
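A numerical check of this volume relation for $p = 2$ (a sketch using the first two variables of the slide-17 example; NumPy assumed):

```python
# (volume of the parallelogram spanned by d_1 and d_2)^2 / (n-1)^p = |S|.
import numpy as np

X = np.array([[700.0, 650.0],
              [500.0, 550.0],
              [660.0, 750.0]])
n, p = X.shape

D = X - X.mean(axis=0)                    # n x p matrix of deviation vectors
volume = np.sqrt(np.linalg.det(D.T @ D))  # Gram determinant gives volume^2

S = np.cov(X, rowvar=False)               # divisor n - 1
print(np.linalg.det(S))                   # GSV
print(volume**2 / (n - 1) ** p)           # identical
```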

34 Implications for Size of GSV.
$$\text{GSV} = |\mathbf{S}| = (n-1)^{-p}(\text{volume})^2 = \frac{(\text{volume})^2}{(n-1)^p}$$
Does the GSV ($|\mathbf{S}|$) increase or decrease when:
$n$ decreases?
the length of any $\mathbf{d}_i = \mathbf{y}_i - \bar{x}_i\mathbf{1}$ increases?
$p$ decreases?
the angles $\theta$ between the $\mathbf{d}_i$'s get smaller (i.e., the $r_{ik}$'s get closer to 1)? Recall
$$r_{ik} = \cos(\theta_{ik}) = \frac{\mathbf{d}_i'\mathbf{d}_k}{L_{\mathbf{d}_i}L_{\mathbf{d}_k}}.$$
Draw an example where $\theta_{ik} = 90^\circ$ versus $\theta_{ik}$ very small.

35 Variable space Interpretation of GSV. In the variable space, we consider the spread of the $n$ points in the $p$-dimensional space around the sample mean $\bar{\mathbf{x}} = (\bar{x}_1, \bar{x}_2, \ldots, \bar{x}_p)'$. For this, we need some more definitions/concepts. The statistical distance of a point $P = (x_1, x_2, \ldots, x_p)$ from the origin is the distance of the point expressed in standardized coordinates,
$$P^* = \left(\frac{x_1}{\sqrt{s_{11}}}, \frac{x_2}{\sqrt{s_{22}}}, \ldots, \frac{x_p}{\sqrt{s_{pp}}}\right);$$
that is,
$$D(O, P) = \sqrt{x_1^{*2} + x_2^{*2} + \cdots + x_p^{*2}} = \sqrt{\frac{x_1^2}{s_{11}} + \frac{x_2^2}{s_{22}} + \cdots + \frac{x_p^2}{s_{pp}}}.$$

36 Constant Statistical Distance. All points with coordinates $(x_1, x_2, \ldots, x_p)$ that are a constant (squared) statistical distance from the origin must satisfy
$$\frac{x_1^2}{s_{11}} + \frac{x_2^2}{s_{22}} + \cdots + \frac{x_p^2}{s_{pp}} = c^2.$$
This is the equation of an ellipsoid centered at the origin (zero) with major and minor axes coinciding with the coordinate axes. [Figure: for $P = (x_1, x_2)$, the ellipse through $P$ crosses the axes at $(\pm c\sqrt{s_{11}}, 0)$ and $(0, \pm c\sqrt{s_{22}})$.]
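A quick check that points on this ellipse all sit at statistical distance $c$ from the origin (a sketch with arbitrary $s_{11}$, $s_{22}$, and $c$; NumPy assumed):

```python
# Parametrize the ellipse x1^2/s11 + x2^2/s22 = c^2 and evaluate the distance.
import numpy as np

s11, s22, c = 4.0, 1.0, 2.0
t = np.linspace(0.0, 2.0 * np.pi, 8)

x1 = c * np.sqrt(s11) * np.cos(t)   # axis crossings at +/- c*sqrt(s11)
x2 = c * np.sqrt(s22) * np.sin(t)   # and +/- c*sqrt(s22)

print(np.sqrt(x1**2 / s11 + x2**2 / s22))   # every entry equals c = 2
```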

37 Statistical Distance between two Points. The statistical distance of a point $P = (x_1, x_2)$ from a point $Q = (y_1, y_2)$ is
$$d(P, Q) = \sqrt{a_{11}(x_1 - y_1)^2 + 2a_{12}(x_1 - y_1)(x_2 - y_2) + a_{22}(x_2 - y_2)^2},$$
where $a_{11}$, $a_{12}$, and $a_{22}$ are constants. For now, we'll suppose that the variables are correlated (i.e., the variable vectors are not at $90^\circ$ angles). Setting the squared distance $d(P, Q)^2$ equal to a constant gives the equation of an ellipse centered at $Q$. [Figure: a tilted ellipse in the $(x_1, x_2)$ plane, centered at $Q$ and passing through $P$.]


39 Rotation of axes. When the variables are correlated, the axes of the ellipse are rotated through an angle $\theta$: namely, $x_1 \to \tilde{x}_1$ and $x_2 \to \tilde{x}_2$, where
$$\tilde{x}_1 = x_1\cos(\theta) + x_2\sin(\theta)$$
$$\tilde{x}_2 = -x_1\sin(\theta) + x_2\cos(\theta)$$
[Figure: the rotated axes $\tilde{x}_1$, $\tilde{x}_2$ at angle $\theta$ to $x_1$, $x_2$, with the ellipse centered at $Q$ aligned with the rotated axes.]

40 Back to Interpretation of GSV. The following is the equation of an ellipsoid (a quadratic form):
$$(\mathbf{x} - \bar{\mathbf{x}})'\mathbf{S}^{-1}(\mathbf{x} - \bar{\mathbf{x}}) = c^2,$$
where $\mathbf{x}$ is like the point $P$ and $\bar{\mathbf{x}}$ is like the point $Q$. The above is the (squared) statistical distance of the points $\mathbf{x}$ from the point $\bar{\mathbf{x}}$ in the $p$-dimensional space that are a constant distance $c$ from $\bar{\mathbf{x}}$. e.g.,
$$p = 1: \quad \frac{(x_1 - \bar{x}_1)^2}{s_{11}} = c^2$$
$$p = 2: \quad a_{11}(x_1 - \bar{x}_1)^2 + 2a_{12}(x_1 - \bar{x}_1)(x_2 - \bar{x}_2) + a_{22}(x_2 - \bar{x}_2)^2 = c^2,$$
where $a_{11}$, $a_{12}$, and $a_{22}$ depend on
$$\mathbf{S} = \begin{pmatrix} s_{11} & s_{12}\\ s_{21} & s_{22}\end{pmatrix} \quad \Longrightarrow \quad \mathbf{S}^{-1} = \begin{pmatrix} a_{11} & a_{12}\\ a_{21} & a_{22}\end{pmatrix}.$$
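Evaluating this quadratic form in code (a sketch with illustrative numbers; the linear solve avoids forming $\mathbf{S}^{-1}$ explicitly; NumPy assumed):

```python
# Squared statistical distance of x from xbar: (x - xbar)' S^{-1} (x - xbar).
import numpy as np

S = np.array([[5.0, 4.0],
              [4.0, 5.0]])
xbar = np.array([2.0, 3.0])
x = np.array([4.0, 4.0])

diff = x - xbar
d2 = diff @ np.linalg.solve(S, diff)   # solve S z = diff, then diff'z
print(d2)   # 1.0 here; all x with the same d2 lie on one ellipse around xbar
```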

41 GSV Interpretation. The volume of this ellipsoid is
$$\text{volume}\{\mathbf{x} : (\mathbf{x} - \bar{\mathbf{x}})'\mathbf{S}^{-1}(\mathbf{x} - \bar{\mathbf{x}}) \le c^2\} = (\text{constant})\,|\mathbf{S}|^{1/2}\,c^p.$$
Note: $|\mathbf{S}|$ = GSV. So the GSV is proportional to the squared volume of the ellipsoid, where the ellipsoid represents statistical distances of observations from the vector of means.

42 When does GSV = 0? Consider the matrix of deviations,
$$\mathbf{X} - \mathbf{1}_n\bar{\mathbf{x}}' = \begin{pmatrix} x_{11} - \bar{x}_1 & x_{12} - \bar{x}_2 & \cdots & x_{1p} - \bar{x}_p\\ x_{21} - \bar{x}_1 & x_{22} - \bar{x}_2 & \cdots & x_{2p} - \bar{x}_p\\ \vdots & \vdots & & \vdots\\ x_{n1} - \bar{x}_1 & x_{n2} - \bar{x}_2 & \cdots & x_{np} - \bar{x}_p \end{pmatrix}.$$
GSV = 0 means that at least one column of $(\mathbf{X} - \mathbf{1}_n\bar{\mathbf{x}}')$ can be expressed as a linear combination of the others: one (or more) of the deviation vectors $\mathbf{d}_j$ lies in the (hyper)plane defined by the others. (Recall our simple example.) The GSV is zero if and only if at least one deviation vector lies in the (hyper)plane formed by all linear combinations of the others; i.e., the columns of the matrix $\mathbf{X} - \mathbf{1}_n\bar{\mathbf{x}}'$ are linearly dependent.

43 Implications & Limitations of GSV. If the sample size is less than or equal to the number of variables, $n \le p$, then $|\mathbf{S}|$ = GSV = 0 for all samples; e.g., the earlier example where $p = 4$ and $n = 2$. More on the meaning and interpretation of the GSV that we'll elaborate on (both points are illustrated in the sketch below):
The GSV represents both variability and covariability.
You can have very different patterns of variability and association (correlational structure) but have the same value for the GSV.
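Both routes to GSV = 0 are easy to see numerically (a sketch with simulated data; NumPy assumed):

```python
# GSV = 0 when a column is a linear combination of the others, and whenever
# n <= p (here n = 2, p = 4 as in the slide's example).
import numpy as np

rng = np.random.default_rng(0)

X = rng.normal(size=(10, 3))
X = np.column_stack([X, X[:, 0] + 2.0 * X[:, 1]])   # dependent 4th column
print(np.linalg.det(np.cov(X, rowvar=False)))       # 0, up to rounding

Y = rng.normal(size=(2, 4))                         # n = 2 cases, p = 4
print(np.linalg.det(np.cov(Y, rowvar=False)))       # 0 as well
```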

44 Different Patterns in S, Same GSV. The problem with GSV = $|\mathbf{S}|$ is that different patterns of variability and covariability can give the same value for the GSV. e.g.,
$$\mathbf{S}_0 = \begin{pmatrix} 3 & 0\\ 0 & 3\end{pmatrix}, \quad \mathbf{S}_1 = \begin{pmatrix} 5 & 4\\ 4 & 5\end{pmatrix}, \quad \mathbf{S}_2 = \begin{pmatrix} 5 & -4\\ -4 & 5\end{pmatrix};$$
$$|\mathbf{S}_0| = 9, \qquad |\mathbf{S}_1| = 9, \qquad |\mathbf{S}_2| = 9.$$
Part of the problem is that the GSV reflects both variability and covariability.
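Checking the three determinants (a sketch; NumPy assumed):

```python
# Very different correlation patterns (r = 0, +0.8, -0.8), identical GSV.
import numpy as np

S0 = np.array([[3.0, 0.0], [0.0, 3.0]])
S1 = np.array([[5.0, 4.0], [4.0, 5.0]])
S2 = np.array([[5.0, -4.0], [-4.0, 5.0]])

for S in (S0, S1, S2):
    print(np.linalg.det(S))   # 9.0 each time
```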

45 The GSV: variances and covariances. Even if the strength of the linear relationships between variables (i.e., the covariances) stays constant, the GSV can be made larger simply by increasing the $s_{ii}$'s: a larger parallelepiped (ellipsoid), hence a larger volume, hence a larger GSV. If you want a measure that reflects only the covariability, first standardize all of the variables so that they have the same length (i.e., variance):
$$x_{ji}^* = \frac{x_{ji} - \bar{x}_i}{\sqrt{s_{ii}}}.$$
The sample covariance matrix of the $x_{ji}^*$'s (i.e., $\mathbf{S}^*$) is the sample correlation matrix: $\mathbf{S}^* = \mathbf{R}$. GSV of the standardized variables = $|\mathbf{R}|$ = det($\mathbf{R}$). The deviation vectors of the $x^*$'s all have the same length, $\sqrt{n-1}$.

46 Proof that $L_{\mathbf{d}_i^*} = \sqrt{n-1}$.
$$\mathbf{d}_i^* = \left(\frac{x_{1i} - \bar{x}_i}{\sqrt{s_{ii}}}, \frac{x_{2i} - \bar{x}_i}{\sqrt{s_{ii}}}, \ldots, \frac{x_{ni} - \bar{x}_i}{\sqrt{s_{ii}}}\right)'$$
$$L_{\mathbf{d}_i^*}^2 = \mathbf{d}_i^{*\prime}\mathbf{d}_i^* = \sum_{j=1}^n \frac{(x_{ji} - \bar{x}_i)^2}{s_{ii}} = \frac{\sum_{j=1}^n (x_{ji} - \bar{x}_i)^2}{\frac{1}{n-1}\sum_{j=1}^n (x_{ji} - \bar{x}_i)^2} = n-1.$$

47 GSV using R. $|\mathbf{R}|$ focuses only on the angles between the $\mathbf{d}_i$'s. When the angles are $90^\circ$, the $\mathbf{d}_i$'s are perpendicular and $|\mathbf{R}|$ reaches its maximum: $r_{ik} = \cos(90^\circ) = 0$ and $|\mathbf{R}| = 1$. When the angles are $0^\circ$ (or $180^\circ$), the $\mathbf{d}_i$'s lie along the same line and $|\mathbf{R}|$ reaches its minimum: $r_{ik} = \pm 1$ and $|\mathbf{R}| = 0$. e.g., the correlation matrices corresponding to $\mathbf{S}_0$, $\mathbf{S}_1$, and $\mathbf{S}_2$ from slide 44, plus the perfectly correlated case:
$$\mathbf{R}_0 = \begin{pmatrix} 1 & 0\\ 0 & 1\end{pmatrix}, \quad \mathbf{R}_1 = \begin{pmatrix} 1 & .8\\ .8 & 1\end{pmatrix}, \quad \mathbf{R}_2 = \begin{pmatrix} 1 & -.8\\ -.8 & 1\end{pmatrix}, \quad \mathbf{R}_3 = \begin{pmatrix} 1 & 1\\ 1 & 1\end{pmatrix};$$
$$|\mathbf{R}_0| = 1, \quad |\mathbf{R}_1| = .36, \quad |\mathbf{R}_2| = .36, \quad |\mathbf{R}_3| = 0.$$

48 Relationship between det(s) and det(r) The two ways of computing the GSV (one on unstandardized and the other on standardized variables) are functionally related: Ë = (s 11 s 22 s pp ) Ê = p s ii R This emphasizes the fact that Ë depends on the s ii s but Ê doesn t. e.g., and i=1 Ë1 = 9 = (5)(5)0.36 = 9 Ë0 = (3)(3) Ê = (3)(3)(1) = 9 C.J. Anderson (Illinois) Sample Geometry Spring / 49

49 Another Global Statistic. While the GSV of the standardized variables, $|\mathbf{R}|$, focuses just on covariability, there is another measure that focuses on summarizing just the variance in $\mathbf{S}$:
$$\text{Total Sample Variance} = \sum_{i=1}^p s_{ii} = \text{trace}(\mathbf{S}) = \text{tr}(\mathbf{S}).$$
We'll use this one later when we talk about principal components analysis.
