Preliminaries. Copyright © 2018 Dan Nettleton (Iowa State University). Statistics 510, 38 slides.
1 Preliminaries
2 Notation for Scalars, Vectors, and Matrices

Lowercase letters denote scalars: $x$, $c$, $\sigma$. Boldface lowercase letters denote vectors: $\mathbf{x}$, $\mathbf{y}$, $\boldsymbol{\beta}$. Boldface uppercase letters denote matrices: $\mathbf{A}$, $\mathbf{X}$, $\boldsymbol{\Sigma}$.
3 The Product of a Scalar and a Matrix

Suppose $A$ is a matrix with $m$ rows and $n$ columns. Let $a_{ij} \in \mathbb{R}$ be the element, or entry, in the $i$th row and $j$th column of $A$. We convey all this information with the notation $A = [a_{ij}]_{m \times n}$.

For any $c \in \mathbb{R}$, $cA = Ac = [c\,a_{ij}]$; i.e., the product of the scalar $c$ and the matrix $A = [a_{ij}]$ is the matrix whose entry in the $i$th row and $j$th column is $c$ times $a_{ij}$, for each $i = 1, \ldots, m$ and $j = 1, \ldots, n$.
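The entrywise rule above can be sketched numerically. This is a minimal illustration in NumPy (our choice of tool; the slides themselves are software-agnostic), with an example matrix of our own:

```python
# Sketch of the entrywise rule cA = Ac = [c * a_ij] (NumPy assumed).
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])   # a 2 x 3 matrix
c = 2.5

left = c * A      # scalar times matrix
right = A * c     # matrix times scalar
print(np.array_equal(left, right))   # True: the two products agree
print(left[0, 1] == c * A[0, 1])     # True: (1,2) entry is c * a_12
```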
4 Vector and Vector Transpose

In STAT 510, a vector is a matrix with one column:
$$x = \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix}.$$
In STAT 510, we use $x'$ to denote the transpose of the vector $x$: $x' = [x_1, \ldots, x_n]$; i.e., $x$ is a matrix with one column, and $x'$ is the matrix with the same entries as $x$ but written as a row rather than a column.
5 Transpose of a Matrix

Suppose $A$ is an $m \times n$ matrix. Then we may write $A$ as $[a_1, \ldots, a_n]$, where $a_i$ is the $i$th column of $A$ for each $i = 1, \ldots, n$. The transpose of the matrix $A$ is
$$A' = [a_1, \ldots, a_n]' = \begin{bmatrix} a_1' \\ \vdots \\ a_n' \end{bmatrix}.$$
6 Matrix Multiplication

Suppose $A = [a_{ij}]_{m \times n}$ and $B = [b_{ij}]_{n \times k}$. Then
$$\underset{m \times n}{A} \; \underset{n \times k}{B} = \underset{m \times k}{C} = \left[\, c_{ij} = \sum_{l=1}^{n} a_{il} b_{lj} \,\right].$$
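The entry formula can be checked against a library product. This sketch (NumPy assumed; the random matrices are our own) recomputes one entry of $AB$ from the definition $c_{ij} = \sum_{l=1}^n a_{il} b_{lj}$:

```python
# Check one entry of AB against the summation definition (NumPy assumed).
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 3, 4, 2
A = rng.standard_normal((m, n))   # m x n
B = rng.standard_normal((n, k))   # n x k
C = A @ B                         # the m x k product

i, j = 1, 0
c_ij = sum(A[i, l] * B[l, j] for l in range(n))   # definition of c_ij
print(C.shape)                    # (3, 2)
print(np.isclose(C[i, j], c_ij))  # True
```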
7 Matrix Multiplication Special Cases

If
$$a = \begin{bmatrix} a_1 \\ \vdots \\ a_n \end{bmatrix} \quad \text{and} \quad b = \begin{bmatrix} b_1 \\ \vdots \\ b_n \end{bmatrix},$$
then $a'b = \sum_{i=1}^{n} a_i b_i$. Also, $a'a = \sum_{i=1}^{n} a_i^2 \equiv \|a\|^2$. The quantity $\|a\| = \sqrt{a'a} = \sqrt{\sum_{i=1}^{n} a_i^2}$ is known as the Euclidean norm of $a$.
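A quick numerical illustration of the inner product and Euclidean norm (NumPy assumed; the vectors are our own example, not from the slides):

```python
# Inner product a'b and Euclidean norm sqrt(a'a) for small vectors.
import numpy as np

a = np.array([3.0, 4.0])
b = np.array([1.0, 2.0])

inner = a @ b                      # a'b = 3*1 + 4*2
norm = np.sqrt(a @ a)              # ||a|| = sqrt(a'a)
print(inner)                       # 11.0
print(norm)                        # 5.0
print(norm == np.linalg.norm(a))   # True: agrees with the library norm
```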
8 Another Look at Matrix Multiplication

Suppose
$$\underset{m \times n}{A} = [a_{ij}] = [a_1, \ldots, a_n] = \begin{bmatrix} a^{(1)} \\ \vdots \\ a^{(m)} \end{bmatrix} \quad \text{and} \quad \underset{n \times k}{B} = [b_{ij}] = [b_1, \ldots, b_k] = \begin{bmatrix} b^{(1)} \\ \vdots \\ b^{(n)} \end{bmatrix}.$$
Then
$$AB = C = \left[\, c_{ij} = \sum_{l=1}^{n} a_{il} b_{lj} \,\right] = \left[\, c_{ij} = a^{(i)} b_j \,\right] = [Ab_1, \ldots, Ab_k] = \begin{bmatrix} a^{(1)} B \\ \vdots \\ a^{(m)} B \end{bmatrix} = \sum_{l=1}^{n} a_l b^{(l)}.$$
9 Transpose of a Matrix Product

The transpose of a matrix product is the product of the transposes in reverse order; i.e., $(AB)' = B'A'$.
10 Rank and Trace

The rank of a matrix $A$, written $\operatorname{rank}(A)$, is the maximum number of linearly independent rows (or columns) of $A$.

The trace of an $n \times n$ matrix $A$, written $\operatorname{trace}(A)$, is the sum of the diagonal elements of $A$; i.e., $\operatorname{trace}(A) = \sum_{i=1}^{n} a_{ii}$.
11 Idempotent Matrices

A matrix $A$ is said to be idempotent if $AA = A$. The rank of an idempotent matrix is equal to its trace; i.e., $\operatorname{rank}(A) = \operatorname{trace}(A)$.
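The rank-equals-trace fact can be seen on a projection matrix $X(X'X)^{-1}X'$, a standard example of an idempotent matrix (the construction and the particular $X$ are ours, not from the slides):

```python
# Projection matrix X(X'X)^{-1}X' is idempotent; rank(A) = trace(A).
import numpy as np

X = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
A = X @ np.linalg.inv(X.T @ X) @ X.T   # 3 x 3 projection onto C(X)

print(np.allclose(A @ A, A))           # True: AA = A
print(np.linalg.matrix_rank(A))        # 2
print(round(np.trace(A)))              # 2, equal to the rank
```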
12 Linear Combinations and Column Spaces

$Ab$ is a linear combination of the columns of an $m \times n$ matrix $A$:
$$Ab = [a_1, \ldots, a_n] \begin{bmatrix} b_1 \\ \vdots \\ b_n \end{bmatrix} = b_1 a_1 + \cdots + b_n a_n.$$
The set of all possible linear combinations of the columns of $A$ is called the column space of $A$ and is written as $\mathcal{C}(A) = \{Ab : b \in \mathbb{R}^n\}$. Note that $\mathcal{C}(A) \subseteq \mathbb{R}^m$.
13 Inverse of a Matrix

An $m \times n$ matrix $A$ is said to be square if $m = n$. A square matrix $A$ is nonsingular, or invertible, if there exists a square matrix $B$ such that $AB = I$. If $A$ is nonsingular and $AB = I$, then $B$ is the unique inverse of $A$ and is written as $A^{-1}$. For a nonsingular matrix $A$, we have $AA^{-1} = I$. (It is also true that $A^{-1}A = I$.) A square matrix without an inverse is called singular.
14 Generalized Inverses

$G$ is a generalized inverse of an $m \times n$ matrix $A$ if $AGA = A$. We usually denote a generalized inverse of $A$ by $A^-$.

If $A$ is nonsingular, i.e., if $A^{-1}$ exists, then $A^{-1}$ is the one and only generalized inverse of $A$: $AA^{-1}A = IA = AI = A$. If $A$ is singular, i.e., if $A^{-1}$ does not exist, then there are infinitely many generalized inverses of $A$.
15 Finding a Generalized Inverse of a Matrix A

1. Find any $r \times r$ nonsingular submatrix of $A$, where $r = \operatorname{rank}(A)$. Call this matrix $W$.
2. Invert and transpose $W$, i.e., compute $(W^{-1})'$.
3. Replace each element of $W$ in $A$ with the corresponding element of $(W^{-1})'$.
4. Replace all other elements of $A$ with zeros.
5. Transpose the resulting matrix to obtain $G$, a generalized inverse of $A$.
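The five-step recipe above can be sketched in code. This helper is our own reading of the algorithm (the function name, the example matrix, and the hand-picked submatrix indices are all ours); the caller supplies the row and column indices of a nonsingular $r \times r$ submatrix $W$:

```python
# Sketch of the 5-step generalized-inverse recipe (our implementation).
import numpy as np

def ginverse_from_submatrix(A, rows, cols):
    """Given indices of an r x r nonsingular submatrix W of A,
    with r = rank(A), return a generalized inverse G of A."""
    W = A[np.ix_(rows, cols)]            # step 1: the chosen submatrix
    Winv_t = np.linalg.inv(W).T          # step 2: (W^{-1})'
    G = np.zeros_like(A, dtype=float)    # step 4: zeros elsewhere
    G[np.ix_(rows, cols)] = Winv_t       # step 3: overwrite W's positions
    return G.T                           # step 5: transpose

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])               # rank 1, singular
G = ginverse_from_submatrix(A, [0], [0]) # W = [[1]] is 1 x 1, nonsingular
print(np.allclose(A @ G @ A, A))         # True: AGA = A, so G is a g-inverse
```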
16 Positive and Non-Negative Definite Matrices

$x'Ax$ is known as a quadratic form. We say that an $n \times n$ matrix $A$ is positive definite (PD) iff $A$ is symmetric (i.e., $A = A'$) and $x'Ax > 0$ for all $x \in \mathbb{R}^n \setminus \{0\}$. We say that an $n \times n$ matrix $A$ is non-negative definite (NND) iff $A$ is symmetric (i.e., $A = A'$) and $x'Ax \geq 0$ for all $x \in \mathbb{R}^n$.
17 Positive and Non-Negative Definite Matrices

A matrix that is positive definite is nonsingular; i.e., $A$ positive definite $\implies$ $A^{-1}$ exists. A matrix that is non-negative definite but not positive definite is singular.
18 Random Vectors

A random vector is a vector whose components are random variables:
$$y = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix}.$$
19 Expected Value of a Random Vector

The expected value, or mean, of a random vector $y$ is the vector of expected values of the components of $y$:
$$E(y) = E\begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix} = \begin{bmatrix} E(y_1) \\ E(y_2) \\ \vdots \\ E(y_n) \end{bmatrix}.$$
20 Variance of a Random Vector

The variance of a random vector $y = [y_1, y_2, \ldots, y_n]'$ is the matrix whose $i,j$th element is $\operatorname{Cov}(y_i, y_j)$ ($i, j \in \{1, \ldots, n\}$):
$$\operatorname{Var}(y) = \begin{bmatrix} \operatorname{Cov}(y_1, y_1) & \operatorname{Cov}(y_1, y_2) & \cdots & \operatorname{Cov}(y_1, y_n) \\ \operatorname{Cov}(y_2, y_1) & \operatorname{Cov}(y_2, y_2) & \cdots & \operatorname{Cov}(y_2, y_n) \\ \vdots & \vdots & \ddots & \vdots \\ \operatorname{Cov}(y_n, y_1) & \operatorname{Cov}(y_n, y_2) & \cdots & \operatorname{Cov}(y_n, y_n) \end{bmatrix}.$$
21 Variance of a Random Vector

The covariance of a random variable with itself is the variance of that random variable. Thus,
$$\operatorname{Var}(y) = \begin{bmatrix} \operatorname{Var}(y_1) & \operatorname{Cov}(y_1, y_2) & \cdots & \operatorname{Cov}(y_1, y_n) \\ \operatorname{Cov}(y_2, y_1) & \operatorname{Var}(y_2) & \cdots & \operatorname{Cov}(y_2, y_n) \\ \vdots & \vdots & \ddots & \vdots \\ \operatorname{Cov}(y_n, y_1) & \operatorname{Cov}(y_n, y_2) & \cdots & \operatorname{Var}(y_n) \end{bmatrix}.$$
22 Covariance Between Two Random Vectors

The covariance between random vectors $u = [u_1, \ldots, u_m]'$ and $v = [v_1, \ldots, v_n]'$ is the matrix whose $i,j$th element is $\operatorname{Cov}(u_i, v_j)$ ($i \in \{1, \ldots, m\}$, $j \in \{1, \ldots, n\}$):
$$\operatorname{Cov}(u, v) = \begin{bmatrix} \operatorname{Cov}(u_1, v_1) & \operatorname{Cov}(u_1, v_2) & \cdots & \operatorname{Cov}(u_1, v_n) \\ \operatorname{Cov}(u_2, v_1) & \operatorname{Cov}(u_2, v_2) & \cdots & \operatorname{Cov}(u_2, v_n) \\ \vdots & \vdots & \ddots & \vdots \\ \operatorname{Cov}(u_m, v_1) & \operatorname{Cov}(u_m, v_2) & \cdots & \operatorname{Cov}(u_m, v_n) \end{bmatrix} = E(uv') - E(u)E(v').$$
23 Linear Transformation of a Random Vector

If $y$ is an $n \times 1$ random vector, $A$ is an $m \times n$ matrix of constants, and $b$ is an $m \times 1$ vector of constants, then $Ay + b$ is a linear transformation of the random vector $y$.
24 Mean, Variance, and Covariance of Linear Transformations of a Random Vector y

$$E(Ay + b) = AE(y) + b$$
$$\operatorname{Var}(Ay + b) = A \operatorname{Var}(y) A'$$
$$\operatorname{Cov}(Ay + b, Cy + d) = A \operatorname{Var}(y) C'$$
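These identities can be checked by simulation. This Monte Carlo sketch is our own (NumPy, the example $\mu$, $\operatorname{Var}(y)$, $A$, and $b$, and the loose tolerances are all our assumptions):

```python
# Monte Carlo check of E(Ay + b) = A E(y) + b and Var(Ay + b) = A Var(y) A'.
import numpy as np

rng = np.random.default_rng(1)
mu = np.array([1.0, -1.0])
sd = np.array([np.sqrt(2.0), 1.0])     # independent components,
Sigma = np.diag(sd ** 2)               # so Var(y) = diag(2, 1)
A = np.array([[1.0, 2.0],
              [0.0, 3.0]])
b = np.array([0.5, 0.5])

y = mu + sd * rng.standard_normal((200_000, 2))   # rows are draws of y
t = y @ A.T + b                                   # rows are A y + b

print(np.allclose(t.mean(axis=0), A @ mu + b, atol=0.05))   # True
print(np.allclose(np.cov(t.T), A @ Sigma @ A.T, atol=0.1))  # True
```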
25 Standard Multivariate Normal Distributions

If $z_1, \ldots, z_n \overset{iid}{\sim} N(0, 1)$, then
$$z = \begin{bmatrix} z_1 \\ \vdots \\ z_n \end{bmatrix}$$
has a standard multivariate normal distribution: $z \sim N(0, I)$.
26 Multivariate Normal Distributions

Suppose $z$ is an $n \times 1$ standard multivariate normal random vector, i.e., $z \sim N(0, I_{n \times n})$. Suppose $A$ is an $m \times n$ matrix of constants and $\mu$ is an $m \times 1$ vector of constants. Then $Az + \mu$ has a multivariate normal distribution with mean $\mu$ and variance $AA'$:
$$z \sim N(0, I) \implies Az + \mu \sim N(\mu, AA').$$
27 Multivariate Normal Distributions

If $\mu$ is an $m \times 1$ vector of constants and $\Sigma$ is an $m \times m$ symmetric, non-negative definite (NND) matrix of rank $n$, then $N(\mu, \Sigma)$ signifies the multivariate normal distribution with mean $\mu$ and variance $\Sigma$. If $y \sim N(\mu, \Sigma)$, then $y \overset{d}{=} Az + \mu$, where $z \sim N(0, I_{n \times n})$ and $A$ is an $m \times n$ matrix of rank $n$ such that $AA' = \Sigma$.
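One concrete choice of such an $A$ is a Cholesky factor of $\Sigma$, which gives a standard way to simulate $N(\mu, \Sigma)$ draws as $\mu + Az$. The construction and the example $\Sigma$, $\mu$ below are ours (NumPy assumed):

```python
# A Cholesky factor A satisfies AA' = Sigma, so mu + A z ~ N(mu, Sigma).
import numpy as np

Sigma = np.array([[4.0, 1.0],
                  [1.0, 2.0]])          # symmetric, positive definite
A = np.linalg.cholesky(Sigma)           # lower triangular, AA' = Sigma
print(np.allclose(A @ A.T, Sigma))      # True

rng = np.random.default_rng(2)
mu = np.array([3.0, -1.0])
z = rng.standard_normal((100_000, 2))   # rows are standard normal vectors
y = mu + z @ A.T                        # rows are mu + A z

print(np.allclose(y.mean(axis=0), mu, atol=0.05))   # True
print(np.allclose(np.cov(y.T), Sigma, atol=0.1))    # True
```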
28 Linear Transformations of Multivariate Normal Distributions are Multivariate Normal

$$y \sim N(\mu, \Sigma) \implies y \overset{d}{=} Az + \mu, \quad z \sim N(0, I), \; AA' = \Sigma$$
$$\implies Cy + d \overset{d}{=} C(Az + \mu) + d$$
$$\implies Cy + d \overset{d}{=} CAz + C\mu + d$$
$$\implies Cy + d \overset{d}{=} Mz + u, \quad M \equiv CA, \; u \equiv C\mu + d$$
$$\implies Cy + d \sim N(u, MM').$$
29 Non-Central Chi-Square Distributions

If $y \sim N(\mu, I_{n \times n})$, then $w \equiv y'y = \sum_{i=1}^{n} y_i^2$ has a non-central chi-square distribution with $n$ degrees of freedom and non-centrality parameter $\mu'\mu/2$: $w \sim \chi^2_n(\mu'\mu/2)$. (Some define the non-centrality parameter as $\mu'\mu$ rather than $\mu'\mu/2$.)
30 Central Chi-Square Distributions

If $z \sim N(0, I_{n \times n})$, then $w \equiv z'z = \sum_{i=1}^{n} z_i^2$ has a central chi-square distribution with $n$ degrees of freedom: $w \sim \chi^2_n$. A central chi-square distribution is a non-central chi-square distribution with non-centrality parameter 0: $w \sim \chi^2_n(0)$.
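A simulation sketch of the central case (ours; sample size, seed, and tolerances are our assumptions): sums of $n$ squared iid $N(0,1)$ draws should have mean $n$ and variance $2n$, matching the moment formulas on the slide below for $\theta = 0$:

```python
# Simulate w = z'z for z ~ N(0, I_5); check mean n and variance 2n.
import numpy as np

rng = np.random.default_rng(3)
n = 5
z = rng.standard_normal((200_000, n))
w = (z ** 2).sum(axis=1)           # z'z for each simulated vector

print(abs(w.mean() - n) < 0.05)    # True: E(w) = n
print(abs(w.var() - 2 * n) < 0.3)  # True: Var(w) = 2n
```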
31 Important Distributional Result about Quadratic Forms

Suppose $\Sigma$ is an $n \times n$ positive definite matrix. Suppose $A$ is an $n \times n$ symmetric matrix of rank $m$ such that $A\Sigma$ is idempotent (i.e., $A\Sigma A\Sigma = A\Sigma$). Then
$$y \sim N(\mu, \Sigma) \implies y'Ay \sim \chi^2_m(\mu'A\mu/2).$$
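The result can be sketched numerically with $\Sigma = I$ and $A$ a projection matrix, so that $A\Sigma$ is idempotent. The construction, seed, and tolerance are our assumptions; we only check the idempotency condition and the implied mean $m + 2\theta$:

```python
# Sigma = I and A a rank-2 projection: check A*Sigma idempotent and
# that the simulated mean of y'Ay matches m + 2*theta.
import numpy as np

X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
A = X @ np.linalg.inv(X.T @ X) @ X.T    # symmetric projection, rank 2
Sigma = np.eye(3)
print(np.allclose(A @ Sigma @ A @ Sigma, A @ Sigma))   # True: idempotent

mu = np.array([1.0, 0.0, -1.0])
m = np.linalg.matrix_rank(A)            # degrees of freedom
theta = mu @ A @ mu / 2                 # non-centrality mu'A mu / 2
rng = np.random.default_rng(5)
y = mu + rng.standard_normal((200_000, 3))
q = np.einsum('ij,jk,ik->i', y, A, y)   # y'Ay for each draw

print(m, round(theta, 6))                      # 2 1.0
print(abs(q.mean() - (m + 2 * theta)) < 0.1)   # True: mean is m + 2*theta
```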
32 Mean and Variance of Chi-Square Distributions

If $w \sim \chi^2_m(\theta)$, then $E(w) = m + 2\theta$ and $\operatorname{Var}(w) = 2m + 8\theta$.
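These moment formulas can be checked against `scipy.stats.ncx2`. Note that SciPy parameterizes the non-centrality as $\mu'\mu$, i.e., twice the $\theta$ of these slides; that translation between conventions is our reading, not stated in the slides:

```python
# E(w) = m + 2*theta and Var(w) = 2m + 8*theta via scipy.stats.ncx2.
# SciPy's nc is mu'mu = 2*theta in the slides' convention (our reading).
from scipy import stats

m, theta = 4, 1.5
w = stats.ncx2(df=m, nc=2 * theta)
print(w.mean())   # 7.0  = m + 2*theta
print(w.var())    # 20.0 = 2*m + 8*theta
```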
33 Non-Central t Distributions

Suppose $y \sim N(\delta, 1)$, $w \sim \chi^2_m$, and $y$ and $w$ are independent. Then $y/\sqrt{w/m}$ has a non-central $t$ distribution with $m$ degrees of freedom and non-centrality parameter $\delta$:
$$\frac{y}{\sqrt{w/m}} \sim t_m(\delta).$$
34 Central t Distributions

Suppose $z \sim N(0, 1)$, $w \sim \chi^2_m$, and $z$ and $w$ are independent. Then $z/\sqrt{w/m}$ has a central $t$ distribution with $m$ degrees of freedom:
$$\frac{z}{\sqrt{w/m}} \sim t_m.$$
The distribution $t_m$ is the same as $t_m(0)$.
35 Non-Central F Distributions

Suppose $w_1 \sim \chi^2_{m_1}(\theta)$, $w_2 \sim \chi^2_{m_2}$, and $w_1$ and $w_2$ are independent. Then $(w_1/m_1)/(w_2/m_2)$ has a non-central $F$ distribution with $m_1$ numerator degrees of freedom, $m_2$ denominator degrees of freedom, and non-centrality parameter $\theta$:
$$\frac{w_1/m_1}{w_2/m_2} \sim F_{m_1, m_2}(\theta).$$
36 Central F Distributions

Suppose $w_1 \sim \chi^2_{m_1}$, $w_2 \sim \chi^2_{m_2}$, and $w_1$ and $w_2$ are independent. Then $(w_1/m_1)/(w_2/m_2)$ has a central $F$ distribution with $m_1$ numerator degrees of freedom and $m_2$ denominator degrees of freedom:
$$\frac{w_1/m_1}{w_2/m_2} \sim F_{m_1, m_2}$$
(which is the same as the $F_{m_1, m_2}(0)$ distribution).
37 Relationship between t and F Distributions

If $u \sim t_m(\delta)$, then $u^2 \sim F_{1,m}(\delta^2/2)$.
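The relationship implies a CDF identity, $P(u^2 \le x) = F_{t_m}(\sqrt{x}) - F_{t_m}(-\sqrt{x})$, which can be checked with `scipy.stats`. For the non-central case, SciPy's `ncf` non-centrality is $\delta^2$ under its $\mu'\mu$ convention, matching the slides' $\delta^2/2$; that translation is our reading of the parameterizations:

```python
# CDF check of u ~ t_m(delta) implies u^2 ~ F_{1,m}(delta^2/2).
import numpy as np
from scipy import stats

m, x = 6, 2.3

# Central case: P(t_m^2 <= x) equals the F(1, m) cdf at x.
lhs = stats.f.cdf(x, 1, m)
rhs = stats.t.cdf(np.sqrt(x), m) - stats.t.cdf(-np.sqrt(x), m)
print(np.isclose(lhs, rhs))                     # True

# Non-central case with delta = 1.2 (SciPy nc = delta^2, our reading).
delta = 1.2
lhs_nc = stats.ncf.cdf(x, 1, m, delta ** 2)
rhs_nc = (stats.nct.cdf(np.sqrt(x), m, delta)
          - stats.nct.cdf(-np.sqrt(x), m, delta))
print(np.isclose(lhs_nc, rhs_nc, atol=1e-5))    # True
```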
38 Some Independence ($\perp\!\!\!\perp$) Results

Suppose $y \sim N(\mu, \Sigma)$, where $\Sigma$ is an $n \times n$ PD matrix.

If $A_1$ is an $n_1 \times n$ matrix of constants and $A_2$ is an $n_2 \times n$ matrix of constants, then $A_1 \Sigma A_2' = 0 \implies A_1 y \perp\!\!\!\perp A_2 y$.

If $A_1$ is an $n_1 \times n$ matrix of constants and $A_2$ is an $n \times n$ symmetric matrix of constants, then $A_1 \Sigma A_2 = 0 \implies A_1 y \perp\!\!\!\perp y' A_2 y$.

If $A_1$ and $A_2$ are $n \times n$ symmetric matrices of constants, then $A_1 \Sigma A_2 = 0 \implies y' A_1 y \perp\!\!\!\perp y' A_2 y$.
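The first result can be illustrated with a small example of our own: under $\Sigma = I$, the sum $y_1 + y_2$ and difference $y_1 - y_2$ satisfy $A_1 \Sigma A_2' = 0$ and so are independent (here we check only that the simulated correlation is near zero):

```python
# A1 Sigma A2' = 0 implies A1 y and A2 y are independent (sum vs. difference).
import numpy as np

Sigma = np.eye(2)
A1 = np.array([[1.0, 1.0]])
A2 = np.array([[1.0, -1.0]])
print(A1 @ Sigma @ A2.T)                 # [[0.]]

rng = np.random.default_rng(4)
y = rng.standard_normal((200_000, 2))    # draws from N(0, I)
u = (y @ A1.T).ravel()                   # A1 y = y1 + y2
v = (y @ A2.T).ravel()                   # A2 y = y1 - y2
print(abs(np.corrcoef(u, v)[0, 1]) < 0.01)   # True: sample correlation ~ 0
```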
More information3d scatterplots. You can also make 3d scatterplots, although these are less common than scatterplot matrices.
3d scatterplots You can also make 3d scatterplots, although these are less common than scatterplot matrices. > library(scatterplot3d) > y par(mfrow=c(2,2)) > scatterplot3d(y,highlight.3d=t,angle=20)
More information2. A Review of Some Key Linear Models Results. Copyright c 2018 Dan Nettleton (Iowa State University) 2. Statistics / 28
2. A Review of Some Key Linear Models Results Copyright c 2018 Dan Nettleton (Iowa State University) 2. Statistics 510 1 / 28 A General Linear Model (GLM) Suppose y = Xβ + ɛ, where y R n is the response
More informationLinear Algebra and Matrix Inversion
Jim Lambers MAT 46/56 Spring Semester 29- Lecture 2 Notes These notes correspond to Section 63 in the text Linear Algebra and Matrix Inversion Vector Spaces and Linear Transformations Matrices are much
More informationANALYTICAL MATHEMATICS FOR APPLICATIONS 2018 LECTURE NOTES 3
ANALYTICAL MATHEMATICS FOR APPLICATIONS 2018 LECTURE NOTES 3 ISSUED 24 FEBRUARY 2018 1 Gaussian elimination Let A be an (m n)-matrix Consider the following row operations on A (1) Swap the positions any
More informationMLES & Multivariate Normal Theory
Merlise Clyde September 6, 2016 Outline Expectations of Quadratic Forms Distribution Linear Transformations Distribution of estimates under normality Properties of MLE s Recap Ŷ = ˆµ is an unbiased estimate
More informationFundamentals of Engineering Analysis (650163)
Philadelphia University Faculty of Engineering Communications and Electronics Engineering Fundamentals of Engineering Analysis (6563) Part Dr. Omar R Daoud Matrices: Introduction DEFINITION A matrix is
More informationIntroduction Eigen Values and Eigen Vectors An Application Matrix Calculus Optimal Portfolio. Portfolios. Christopher Ting.
Portfolios Christopher Ting Christopher Ting http://www.mysmu.edu/faculty/christophert/ : christopherting@smu.edu.sg : 6828 0364 : LKCSB 5036 November 4, 2016 Christopher Ting QF 101 Week 12 November 4,
More information1. Linear systems of equations. Chapters 7-8: Linear Algebra. Solution(s) of a linear system of equations (continued)
1 A linear system of equations of the form Sections 75, 78 & 81 a 11 x 1 + a 12 x 2 + + a 1n x n = b 1 a 21 x 1 + a 22 x 2 + + a 2n x n = b 2 a m1 x 1 + a m2 x 2 + + a mn x n = b m can be written in matrix
More informationFinite Mathematics Chapter 2. where a, b, c, d, h, and k are real numbers and neither a and b nor c and d are both zero.
Finite Mathematics Chapter 2 Section 2.1 Systems of Linear Equations: An Introduction Systems of Equations Recall that a system of two linear equations in two variables may be written in the general form
More information1 Determinants. 1.1 Determinant
1 Determinants [SB], Chapter 9, p.188-196. [SB], Chapter 26, p.719-739. Bellow w ll study the central question: which additional conditions must satisfy a quadratic matrix A to be invertible, that is to
More informationChapter 5. The multivariate normal distribution. Probability Theory. Linear transformations. The mean vector and the covariance matrix
Probability Theory Linear transformations A transformation is said to be linear if every single function in the transformation is a linear combination. Chapter 5 The multivariate normal distribution When
More informationMA 575 Linear Models: Cedric E. Ginestet, Boston University Revision: Probability and Linear Algebra Week 1, Lecture 2
MA 575 Linear Models: Cedric E Ginestet, Boston University Revision: Probability and Linear Algebra Week 1, Lecture 2 1 Revision: Probability Theory 11 Random Variables A real-valued random variable is
More informationLecture 11. Multivariate Normal theory
10. Lecture 11. Multivariate Normal theory Lecture 11. Multivariate Normal theory 1 (1 1) 11. Multivariate Normal theory 11.1. Properties of means and covariances of vectors Properties of means and covariances
More informationKnowledge Discovery and Data Mining 1 (VO) ( )
Knowledge Discovery and Data Mining 1 (VO) (707.003) Review of Linear Algebra Denis Helic KTI, TU Graz Oct 9, 2014 Denis Helic (KTI, TU Graz) KDDM1 Oct 9, 2014 1 / 74 Big picture: KDDM Probability Theory
More informationMATH.2720 Introduction to Programming with MATLAB Vector and Matrix Algebra
MATH.2720 Introduction to Programming with MATLAB Vector and Matrix Algebra A. Vectors A vector is a quantity that has both magnitude and direction, like velocity. The location of a vector is irrelevant;
More information4. Matrix inverses. left and right inverse. linear independence. nonsingular matrices. matrices with linearly independent columns
L. Vandenberghe ECE133A (Winter 2018) 4. Matrix inverses left and right inverse linear independence nonsingular matrices matrices with linearly independent columns matrices with linearly independent rows
More informationJim Lambers MAT 610 Summer Session Lecture 1 Notes
Jim Lambers MAT 60 Summer Session 2009-0 Lecture Notes Introduction This course is about numerical linear algebra, which is the study of the approximate solution of fundamental problems from linear algebra
More informationMatrix Algebra: Summary
May, 27 Appendix E Matrix Algebra: Summary ontents E. Vectors and Matrtices.......................... 2 E.. Notation.................................. 2 E..2 Special Types of Vectors.........................
More informationStat 206: Linear algebra
Stat 206: Linear algebra James Johndrow (adapted from Iain Johnstone s notes) 2016-11-02 Vectors We have already been working with vectors, but let s review a few more concepts. The inner product of two
More informationMath Camp Notes: Linear Algebra I
Math Camp Notes: Linear Algebra I Basic Matrix Operations and Properties Consider two n m matrices: a a m A = a n a nm Then the basic matrix operations are as follows: a + b a m + b m A + B = a n + b n
More information文件中找不不到关系 ID 为 rid3 的图像部件. Review of LINEAR ALGEBRA I. TA: Yujia Xie CSE 6740 Computational Data Analysis. Georgia Institute of Technology
文件中找不不到关系 ID 为 rid3 的图像部件 Review of LINEAR ALGEBRA I TA: Yujia Xie CSE 6740 Computational Data Analysis Georgia Institute of Technology 1. Notations A R $ & : a matrix with m rows and n columns, where
More informationIntroduction to Numerical Linear Algebra II
Introduction to Numerical Linear Algebra II Petros Drineas These slides were prepared by Ilse Ipsen for the 2015 Gene Golub SIAM Summer School on RandNLA 1 / 49 Overview We will cover this material in
More information5: MULTIVARATE STATIONARY PROCESSES
5: MULTIVARATE STATIONARY PROCESSES 1 1 Some Preliminary Definitions and Concepts Random Vector: A vector X = (X 1,..., X n ) whose components are scalarvalued random variables on the same probability
More information11. Linear Mixed-Effects Models. Copyright c 2018 Dan Nettleton (Iowa State University) 11. Statistics / 49
11. Linear Mixed-Effects Models Copyright c 2018 Dan Nettleton (Iowa State University) 11. Statistics 510 1 / 49 The Linear Mixed-Effects Model y = Xβ + Zu + e X is an n p matrix of known constants β R
More information