Vectors and Matrices; Statistics with Vectors and Matrices


1 Vectors and Matrices; Statistics with Vectors and Matrices. Lecture 3, September 7, 2005. Lecture #3 - 9/7/2005, Slide 1 of 55.

2 Today's Lecture: Vectors and Matrices (Supplement A, augmented with SAS proc iml; Chapter 2). Statistics of Vectors and Matrices (beginning of Chapter 3, Section 3.5).

3 Last Time: Basics of matrix algebra: matrices and matrix types (e.g., symmetric, diagonal, etc.), vectors, scalars. Basic matrix operations: transpose, addition/subtraction, matrix multiplication, scalar multiplication. Basic matrix entities: identity matrix, zero matrix.

4 Vector Geometry. (Topics: Scalar Multiplication, Linear Combinations, Linear Dependencies, Vector Length, Inner Product, Angle Between Vectors, Vector Projections.) Recall that a vector can be thought of as a line (with a direction) emanating from the origin and terminating at a point. For instance, take the column vector x = [3, 4]'. [Figure: the vector x plotted from the origin.]

5 Scalar Multiplication. Scalar multiplication of a vector changes the length of the vector. [Figure: scalar multiples of x plotted along the same line through the origin.]

6 Linear Combinations. Vectors can be combined by adding multiples: y = a_1 x_1 + a_2 x_2 + ... + a_k x_k. The resulting vector, y, is called a linear combination. For k vectors, the set of all possible linear combinations is called their span.

7 Linear Combinations. Geometrically, the linear combination of vectors looks like: x = [3, 4]', y = [2, 1]', and their sum x + y = [5, 5]'. [Figure: x, y, and x + y plotted from the origin.]

8 Linear Dependencies. A set of vectors is said to be linearly dependent if constants a_1, a_2, ..., a_k exist, not all zero, such that a_1 x_1 + a_2 x_2 + ... + a_k x_k = 0. Such linear dependencies occur when a linear combination is added to the vector set. Matrices comprised of a set of linearly dependent vectors are singular. A set of linearly independent vectors forms what is called a basis for the vector space; any vector in the vector space can then be expressed as a linear combination of the basis vectors.

9 Example Basis Vectors. The following two vectors form a basis for the Cartesian coordinate system: u_1 = [1, 0]' and u_2 = [0, 1]'. All possible points on the graph can be represented by linear combinations of u_1 and u_2. From our previous example: x = 3u_1 + 4u_2 and y = 2u_1 + 1u_2. This hints at a major part of multivariate analysis: vector and matrix decomposition.

10 Vector Length. The length of a vector emanating from the origin is given by the Pythagorean formula: L_x = sqrt(x_1^2 + x_2^2 + ... + x_k^2) = sqrt(x'x). The length of x = [3, 4]' is sqrt(3^2 + 4^2) = sqrt(25) = 5. This is found by forming a right triangle from scalar multiples of the basis vectors. [Figure: right triangle with legs 3u_1 and 4u_2 and hypotenuse of length L_x.]

11 Inner Product. The inner (or dot) product of two vectors x and y is the sum of element-by-element multiplication: x'y = x_1 y_1 + x_2 y_2 + ... + x_k y_k. From our example: x'y = [3, 4][2, 1]' = 3(2) + 4(1) = 10. The inner product is used to compute the angle between vectors...

12 Vector Angle. The angle θ formed between two vectors x and y satisfies: cos(θ) = x'y / (sqrt(x'x) sqrt(y'y)). If x'y = 0, the vectors x and y are perpendicular, denoted x ⊥ y. [Figure: x and y plotted with the angle θ between them.]

13 Vector Angle. From our example: cos(θ) = x'y / (sqrt(x'x) sqrt(y'y)) = 10 / (5 × sqrt(5)) = .894, so θ = cos⁻¹(.894) = 26.6°. All basis vectors are perpendicular; for example, u_1'u_2 = 0.
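The lecture's examples use SAS proc iml; as an unofficial cross-check (not part of the course), the length, inner product, and angle computations above can be reproduced in Python/NumPy:

```python
import numpy as np

# The slides' running example: x = [3, 4]', y = [2, 1]'.
x = np.array([3.0, 4.0])
y = np.array([2.0, 1.0])

L_x = np.sqrt(x @ x)                                   # length: sqrt(x'x)
dot = x @ y                                            # inner product x'y
cos_theta = dot / (np.sqrt(x @ x) * np.sqrt(y @ y))    # cosine of the angle
theta_deg = np.degrees(np.arccos(cos_theta))

print(L_x)                   # 5.0
print(dot)                   # 10.0
print(round(theta_deg, 1))   # 26.6
```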

14 Vector Projections. The projection of a vector x onto a vector y is given by: (x'y / y'y) y = (x'y / L_y^2) y. From our example, the projection of x onto y is: (x'y / y'y) y = (10/5) y = 2y = [4, 2]'.

15 Vector Projections. From our example, the projection of x onto y is: (x'y / y'y) y = (10/5) y = 2y = [4, 2]'. [Figure: x, y, and the projection of x onto y.]

16 Vector Projections. From our example, the projection of y onto x is: (y'x / x'x) x = (10/25) x = [1.2, 1.6]'. [Figure: x, y, and the projection of y onto x.]
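Both projection directions can be verified the same way; a quick NumPy check of the two formulas above (again an unofficial illustration, not the course's SAS code):

```python
import numpy as np

x = np.array([3.0, 4.0])
y = np.array([2.0, 1.0])

proj_x_on_y = (x @ y) / (y @ y) * y   # (x'y / y'y) y
proj_y_on_x = (y @ x) / (x @ x) * x   # (y'x / x'x) x

print(proj_x_on_y)  # [4. 2.]
print(proj_y_on_x)  # [1.2 1.6]
```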

17 Vector Projections. The length of the projection of x onto y is L_x cos(θ). Vector projections are at the root of hypothesis tests for the general linear model (ANOVA and multiple regression). Through such projections, a set of linearly independent vectors can be created from any set of vectors; one process used to create such vectors is the Gram-Schmidt process. Creating linearly independent vectors is useful in multivariate statistics (e.g., as a work-around for collinearity).
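The Gram-Schmidt process mentioned above is just repeated projection-and-subtraction. A minimal sketch in Python/NumPy (not from the lecture; the function name and the skipping of dependent vectors are illustrative choices):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthogonalize a list of vectors by subtracting projections onto
    the already-accepted basis vectors."""
    basis = []
    for v in vectors:
        w = v.astype(float).copy()
        for b in basis:
            w -= (w @ b) / (b @ b) * b   # remove the component along b
        if not np.allclose(w, 0):        # skip linearly dependent vectors
            basis.append(w)
    return basis

u1, u2 = gram_schmidt([np.array([3.0, 4.0]), np.array([2.0, 1.0])])
print(u1 @ u2)  # ~0: the resulting vectors are perpendicular
```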

18 Matrix Division: The Inverse. Recall from basic math that: a/b = (1/b) a = b⁻¹ a, and that a⁻¹ a = 1. Matrix inverses are just like division in basic math.

19 The Inverse. For a square matrix, an inverse matrix is simply the matrix that, when pre-multiplied or post-multiplied with the original matrix, produces the identity matrix: A⁻¹A = AA⁻¹ = I. Hand calculation of a matrix inverse is complicated and largely unnecessary, since computers are much more efficient at finding inverses. One point of emphasis: just as division by zero is undefined in regular arithmetic, not all matrices can be inverted. Matrices comprised of linearly dependent column vectors cannot be inverted.

20 SAS. Instead of learning how to invert a matrix by hand from Supplement 2.1, let me teach you:

proc iml;
reset print;
a = {3 6, 4 -2};
a_inv = inv(a);
quit;

21 Singular Matrices. A matrix that cannot be inverted is called a singular matrix. In statistics, a common cause of singular matrices is linear dependence among the rows or columns of a square matrix. Linear dependence can be caused by combinations of variables, or by variables with extreme correlations (either near 1.00 or -1.00). For example:

a = {3 6, 9 18};
a_inv = inv(a);

Gives:

8 a={3 6, 9 18};
9 a_inv = inv(a);
ERROR: (execution) Matrix should be non-singular.
operation : INV at line 9 column 16
operands : A
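Outside of SAS, the same behavior can be observed in Python/NumPy (an unofficial illustration; the nonsingular matrix mirrors the earlier SAS example, and the second matrix repeats the singular one above, whose second row is 3 times its first):

```python
import numpy as np

a = np.array([[3.0, 6.0], [4.0, -2.0]])
a_inv = np.linalg.inv(a)
print(np.allclose(a @ a_inv, np.eye(2)))   # True: A A^{-1} = I

# Linearly dependent rows (row 2 = 3 * row 1) make the matrix singular:
b = np.array([[3.0, 6.0], [9.0, 18.0]])
try:
    np.linalg.inv(b)
except np.linalg.LinAlgError as err:
    print(err)   # the inverse does not exist
```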

22 Singular Matrices. Diagonal matrices have easy-to-compute inverses:

A = diag(a_11, a_22, a_33)    A⁻¹ = diag(1/a_11, 1/a_22, 1/a_33)
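A quick NumPy check of the diagonal-inverse rule (the diagonal values here are hypothetical, chosen only for illustration):

```python
import numpy as np

d = np.diag([2.0, 5.0, 10.0])       # a diagonal matrix
d_inv = np.linalg.inv(d)

# The inverse is just the reciprocals of the diagonal elements:
print(np.diag(d_inv))               # [0.5 0.2 0.1]
```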

23 The following are some algebraic properties of matrices: (A + B) + C = A + (B + C) (associative); A + B = B + A (commutative); c(A + B) = cA + cB (distributive); (c + d)A = cA + dA; (A + B)' = A' + B'; (cd)A = c(dA); (cA)' = cA'.

24 The following are more algebraic properties of matrices: c(AB) = (cA)B; A(BC) = (AB)C; A(B + C) = AB + AC; (B + C)A = BA + CA; (AB)' = B'A'. For x_j such that A x_j is defined: Σ_{j=1}^{n} A x_j = A Σ_{j=1}^{n} x_j, and Σ_{j=1}^{n} (A x_j)(A x_j)' = A (Σ_{j=1}^{n} x_j x_j') A'.

25 Advanced Matrix Functions/Operations. We end our matrix discussion with some advanced topics, all of which form the foundation of multivariate analyses: orthogonality, eigenvalues and eigenvectors, decompositions, quadratic forms, square root matrices, determinants, and traces.

26 Matrix Orthogonality. A square matrix Q is said to be orthogonal if: QQ' = Q'Q = I. Orthogonal matrices are characterized by two properties: 1. The inner product of any two distinct rows is zero (the rows are perpendicular). 2. For each row vector, the sum of the squared elements is one (each row has unit length).
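A rotation matrix is a standard example of an orthogonal matrix; a NumPy check of both defining properties (an illustration outside the lecture, with an arbitrary 30° rotation):

```python
import numpy as np

theta = np.radians(30)
q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

print(np.allclose(q @ q.T, np.eye(2)))           # True: QQ' = I
print(np.allclose(q.T @ q, np.eye(2)))           # True: Q'Q = I
print(np.allclose((q ** 2).sum(axis=1), 1.0))    # True: each row has unit length
print(np.isclose(q[0] @ q[1], 0.0))              # True: distinct rows are perpendicular
```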

27 Eigenvalues and Eigenvectors. A square matrix A has eigenvalue λ with associated eigenvector x (x ≠ 0) if: Ax = λx. From a statistical standpoint: principal components are comprised of linear combinations of a set of variables, weighted by the eigenvectors; the eigenvalues represent the proportion of variance accounted for by specific principal components; each principal component is orthogonal to the next, producing a set of uncorrelated variables that may be used for statistical purposes (such as in multiple regression). SAS Example #1...

28 Spectral Decompositions. Imagine that a symmetric matrix A is of size k × k. A then has k eigenvalues λ_i, i = 1, ..., k, and k eigenvectors e_i, i = 1, ..., k (each of size k × 1). A can be expressed as: A = Σ_{i=1}^{k} λ_i e_i e_i'. This expression is called the spectral decomposition, where A is decomposed into k parts. SAS Example #2...
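The spectral decomposition can be checked numerically; a NumPy sketch (not the course's SAS example — the symmetric matrix here is hypothetical):

```python
import numpy as np

a = np.array([[2.0, 1.0], [1.0, 2.0]])    # a symmetric matrix
eigvals, eigvecs = np.linalg.eigh(a)      # columns of eigvecs are the e_i

# Rebuild A as the sum of lambda_i * e_i e_i':
rebuilt = sum(lam * np.outer(e, e) for lam, e in zip(eigvals, eigvecs.T))
print(np.allclose(rebuilt, a))            # True
```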

29 Quadratic Forms. A quadratic form of a matrix A is given by: x'Ax. A symmetric matrix A (of size k × k) is said to be positive definite if, for all x ≠ 0: x'Ax > 0. This implies: all eigenvalues of A are greater than zero; the determinant of A is greater than zero (to be discussed); the trace of A is greater than zero (to be discussed); A is non-singular (invertible). Quadratic forms will be discussed in more detail when we introduce the concept of statistical distance.

30 Square Root Matrices. The square root matrix of a positive definite matrix A can be formed by: A^(1/2) = Σ_{i=1}^{k} sqrt(λ_i) e_i e_i' = P Λ^(1/2) P'. The square root matrix has the following properties: (A^(1/2))' = A^(1/2), so A^(1/2) is symmetric; A^(1/2) A^(1/2) = A; A^(-1/2) = Σ_{i=1}^{k} (1/sqrt(λ_i)) e_i e_i' = P Λ^(-1/2) P'; A^(1/2) A^(-1/2) = A^(-1/2) A^(1/2) = I; A^(-1/2) A^(-1/2) = A^(-1).
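All of the square-root-matrix properties above can be verified from the eigendecomposition; a NumPy sketch using the same hypothetical positive definite matrix (eigenvalues 1 and 3):

```python
import numpy as np

a = np.array([[2.0, 1.0], [1.0, 2.0]])    # positive definite
eigvals, p = np.linalg.eigh(a)            # columns of p are eigenvectors

a_half = p @ np.diag(np.sqrt(eigvals)) @ p.T       # A^(1/2) = P Lambda^(1/2) P'
a_neg_half = p @ np.diag(1 / np.sqrt(eigvals)) @ p.T

print(np.allclose(a_half @ a_half, a))                          # True
print(np.allclose(a_half @ a_neg_half, np.eye(2)))              # True
print(np.allclose(a_neg_half @ a_neg_half, np.linalg.inv(a)))   # True
```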

31 Matrix Determinants. A square matrix can be characterized by a scalar value called a determinant: det A = |A|. Much like the matrix inverse, calculation of the determinant is very complicated and tedious, and is best left to computers. What can be learned from the determinant is whether a matrix is singular. A positive definite matrix has a determinant greater than zero, a byproduct of which is that a positive definite matrix is non-singular. Also: |A| = Π_{i=1}^{k} λ_i. SAS example #3...

32 Matrix Traces. For a square matrix A (size k × k), the trace of the matrix is defined as the sum of the diagonal elements: tr(A) = Σ_{i=1}^{k} a_ii = Σ_{i=1}^{k} λ_i. SAS example #4...
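The determinant-as-product and trace-as-sum relationships with the eigenvalues can be confirmed together; a NumPy check on the same hypothetical symmetric matrix used above:

```python
import numpy as np

a = np.array([[2.0, 1.0], [1.0, 2.0]])
eigvals = np.linalg.eigvalsh(a)           # eigenvalues 1 and 3

print(np.isclose(np.linalg.det(a), np.prod(eigvals)))  # True: |A| = product of eigenvalues
print(np.isclose(np.trace(a), np.sum(eigvals)))        # True: tr(A) = sum of eigenvalues
```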

33 Random Variables. (Topics: Mean, Variance, Linear Combination of x, Covariance of x and y, Cov(x, y), Corr(x, y).) A random variable is a variable whose outcome depends on the result of a chance experiment. It can be continuous or discrete (most of our statistics deal with continuous variables). It has a density function f(x) that indicates the relative frequency of occurrence. We will now discuss the basic ways used to summarize a random variable x.

34 Mean. The mean is a measure of central tendency. The expectation E(x) describes the population mean: E(x) = μ_x. The sample mean is x̄; it will (almost) never exactly equal the population mean. x̄ = (1/n) Σ_{i=1}^{n} x_i = (1/n) 1'x, where 1 is a column vector of ones, of size n × 1. The sampling distribution of x̄ has mean μ_x and variance σ_x²/n, and, as n increases, the distribution converges to a normal distribution.

35 Variance. Variance is a measure of spread/range. The population variance σ_x² is: E((x − μ_x)²) = σ_x². The sample variance s_x² will (almost) never exactly equal the population variance: s_x² = (1/n) Σ_{i=1}^{n} (x_i − x̄)² = (1/n)(x − x̄1)'(x − x̄1).

36 Linear Combination of x. Let's look at what happens to the mean if we compute a new variable ax. For the mean (a measure of central tendency): E(ax) = aE(x), and the sample mean of ax is a x̄. For the variance (a measure of spread/range): E((ax − aμ_x)²) = a²σ_x², and the sample variance of ax is Var(ax) = a² s_x².

37 Covariance of x and y. If x and y are measured on each unit (e.g., person), we are said to have a bivariate random variable (x, y). Expectation: E(x + y) = E(x) + E(y), and E(xy) = E(x)E(y) ONLY IF x and y are independent. It is possible that x and y have some relationship, so they will tend to covary. Examples: height and weight; you may live closer to campus the longer you have been going to school here.

38 Cov(x, y). The population covariance is: σ_xy = E[(x − μ_x)(y − μ_y)] = E(xy) − μ_x μ_y. Since E(xy) = E(x)E(y) only if x and y are independent, the population covariance σ_xy equals zero if x and y are independent (i.e., orthogonal). The sample covariance is: s_xy = Σ_{i=1}^{n} (x_i − x̄)(y_i − ȳ) / n. The sample covariance will (almost) never exactly equal the population covariance.

39 Corr(x, y). One problem with the covariance is that it is dependent on scale: multiplying a variable by a constant changes the covariance. We standardize the covariance by computing the correlation. Correlation is unchanged when variables are transformed using a (positive) linear transformation. Correlation is the same as the covariance WHEN x and y are standardized (i.e., have variance 1).

40 Corr(x, y). Population: Corr(x, y) = ρ_xy = σ_xy / (σ_x σ_y) = E[(x − μ_x)(y − μ_y)] / sqrt(E[(x − μ_x)²] E[(y − μ_y)²]). Sample: Corr(x, y) = r_xy = s_xy / (s_x s_y) = Σ(x_i − x̄)(y_i − ȳ) / sqrt(Σ(x_i − x̄)² Σ(y_i − ȳ)²).

41 Corr(x, y). Also, correlation is related to the angle θ between two normalized (mean-centered) vectors a and b: cos(θ) = r_xy = [a'a + b'b − (b − a)'(b − a)] / [2 sqrt((a'a)(b'b))].
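That identity says the correlation is exactly the cosine of the angle between the centered data vectors; a NumPy check on small hypothetical paired data (not from the lecture):

```python
import numpy as np

x = np.array([1.0, 2.0, 4.0, 5.0])
y = np.array([2.0, 3.0, 5.0, 9.0])

a = x - x.mean()   # centered (deviation-from-mean) vectors
b = y - y.mean()

cos_theta = (a @ b) / np.sqrt((a @ a) * (b @ b))   # cosine of the angle between a and b
r = np.corrcoef(x, y)[0, 1]                        # Pearson correlation
print(np.isclose(cos_theta, r))                    # True
```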

42 More Dimensions. (Topics: SAS Example #5, Mean, Variance/Covariance, Correlation Matrix.) Now we are going to start generalizing the univariate concepts to multivariate data. We begin by defining multivariate data as a vector of p observations that have been taken from a single entity. We will use x to indicate a (p × 1) vector from a single entity, and X to indicate an (n × p) matrix containing n entities, each with measurements on p variables.

43 More Dimensions.

X = [ x_11  x_12  ...  x_1p
      x_21  x_22  ...  x_2p
      ...
      x_n1  x_n2  ...  x_np ]

Equivalently, X can be written in terms of its rows: X = [x_1'; x_2'; ...; x_n'], where x_i' is the transposed vector of p measurements for entity i.

44 SAS Example #5. Using some multivariate statistical methods I have yet to discuss, I estimated a set of seven different computer abilities for 3000 students in a school district.

45 Mean. The mean is a measure of central tendency. The expectation E(x) describes the vector of population means: E(x) = [E(x_1), E(x_2), ..., E(x_p)]' = [μ_1, μ_2, ..., μ_p]' = μ.

46 Mean. The sample mean vector x̄ will (almost) never exactly equal the population mean vector: x̄ = (1/n) Σ_{i=1}^{n} x_i = (1/n) X'1, with x̄ = [x̄_1, x̄_2, ..., x̄_p]'.

47 Variance/Covariance. Originally, when there was a single variable, we used the variance. Once we had two variables, we could also describe the way the two variables covaried. Using this same idea, if there are several variables, then to fully capture the variability of our data we must describe both: the variability of each variable, and the covariance of each variable with all other variables. These are stored and reported in a variance/covariance matrix.

48 Variance/Covariance.

Population:
Σ = [ σ_11  σ_12  ...  σ_1p
      σ_21  σ_22  ...  σ_2p
      ...
      σ_p1  σ_p2  ...  σ_pp ]

Sample:
S = [ s_11  s_12  ...  s_1p
      s_21  s_22  ...  s_2p
      ...
      s_p1  s_p2  ...  s_pp ]

49 Variance. Variance is a measure of spread/range. The population variance/covariance matrix Σ is: E((x − μ)(x − μ)') = Σ. The sample variance/covariance matrix S will (almost) never exactly equal the population matrix: S = (1/(n − 1)) Σ_{i=1}^{n} (x_i − x̄)(x_i − x̄)' = (1/(n − 1)) (X − 1x̄')'(X − 1x̄').
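The mean-vector and sample-covariance formulas above can be verified on a tiny hypothetical (n = 4, p = 2) data matrix; a NumPy sketch (the data are illustrative, not the lecture's computer-abilities data):

```python
import numpy as np

X = np.array([[1.0, 2.0],
              [2.0, 3.0],
              [4.0, 5.0],
              [5.0, 9.0]])
n = X.shape[0]
ones = np.ones((n, 1))

xbar = (X.T @ ones / n).ravel()              # (1/n) X'1
centered = X - ones @ xbar.reshape(1, -1)    # X - 1 xbar'
S = centered.T @ centered / (n - 1)          # (1/(n-1)) (X - 1 xbar')'(X - 1 xbar')

print(np.allclose(xbar, X.mean(axis=0)))         # True
print(np.allclose(S, np.cov(X, rowvar=False)))   # True (np.cov also uses n-1)
```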

50 Correlation Matrix. A problem with covariance is that it can be difficult to interpret. So, just like in the bivariate case, we can compute the correlation. Recall that the correlation is: ρ_xy = σ_xy / (σ_x σ_y).

51 Variance/Covariance. So, by substituting in, our correlation matrix should look like:

Population:
P = [ σ_11/(σ_1 σ_1)  σ_12/(σ_1 σ_2)  ...  σ_1p/(σ_1 σ_p)
      σ_21/(σ_2 σ_1)  σ_22/(σ_2 σ_2)  ...  σ_2p/(σ_2 σ_p)
      ...
      σ_p1/(σ_p σ_1)  σ_p2/(σ_p σ_2)  ...  σ_pp/(σ_p σ_p) ]

Notice the pattern? It looks like something that we saw using diagonal matrices.

52 Correlation. Define a diagonal matrix with the inverses of the standard deviations on the diagonal (D⁻¹): D⁻¹ = diag(1/σ_1, 1/σ_2, ..., 1/σ_p).

53 Correlation. Then we can define our correlation matrix as (Population): P = D⁻¹ Σ D⁻¹. While all of this has been phrased in terms of the population, we can also compute the sample correlation matrix: R = D⁻¹ S D⁻¹, with D built from the sample standard deviations.
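The R = D⁻¹SD⁻¹ construction can be checked against a direct correlation computation; a NumPy sketch on the same small hypothetical data matrix used earlier:

```python
import numpy as np

X = np.array([[1.0, 2.0],
              [2.0, 3.0],
              [4.0, 5.0],
              [5.0, 9.0]])
S = np.cov(X, rowvar=False)                # sample covariance matrix (n-1 divisor)

d_inv = np.diag(1 / np.sqrt(np.diag(S)))   # D^{-1}: reciprocals of the standard deviations
R = d_inv @ S @ d_inv                      # R = D^{-1} S D^{-1}

print(np.allclose(R, np.corrcoef(X, rowvar=False)))  # True
print(np.allclose(np.diag(R), 1.0))                  # True: unit diagonal
```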

54 Final Thought. Matrix algebra makes the technical things in life easier. The applications of matrices will be demonstrated throughout the rest of this course. Virtually all of statistics can be expressed with matrices; once you learn to read and think in matrices, statistics becomes much easier.

55 Next Time. Geometric implications of multivariate descriptive statistics (sections of Chapter 3). The multivariate normal distribution (Chapter 4).


MA 575 Linear Models: Cedric E. Ginestet, Boston University Revision: Probability and Linear Algebra Week 1, Lecture 2 MA 575 Linear Models: Cedric E Ginestet, Boston University Revision: Probability and Linear Algebra Week 1, Lecture 2 1 Revision: Probability Theory 11 Random Variables A real-valued random variable is

More information

COMP 558 lecture 18 Nov. 15, 2010

COMP 558 lecture 18 Nov. 15, 2010 Least squares We have seen several least squares problems thus far, and we will see more in the upcoming lectures. For this reason it is good to have a more general picture of these problems and how to

More information

Linear Algebra Highlights

Linear Algebra Highlights Linear Algebra Highlights Chapter 1 A linear equation in n variables is of the form a 1 x 1 + a 2 x 2 + + a n x n. We can have m equations in n variables, a system of linear equations, which we want to

More information

Linear Algebra. Matrices Operations. Consider, for example, a system of equations such as x + 2y z + 4w = 0, 3x 4y + 2z 6w = 0, x 3y 2z + w = 0.

Linear Algebra. Matrices Operations. Consider, for example, a system of equations such as x + 2y z + 4w = 0, 3x 4y + 2z 6w = 0, x 3y 2z + w = 0. Matrices Operations Linear Algebra Consider, for example, a system of equations such as x + 2y z + 4w = 0, 3x 4y + 2z 6w = 0, x 3y 2z + w = 0 The rectangular array 1 2 1 4 3 4 2 6 1 3 2 1 in which the

More information

LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM

LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM Unless otherwise stated, all vector spaces in this worksheet are finite dimensional and the scalar field F is R or C. Definition 1. A linear operator

More information

22.3. Repeated Eigenvalues and Symmetric Matrices. Introduction. Prerequisites. Learning Outcomes

22.3. Repeated Eigenvalues and Symmetric Matrices. Introduction. Prerequisites. Learning Outcomes Repeated Eigenvalues and Symmetric Matrices. Introduction In this Section we further develop the theory of eigenvalues and eigenvectors in two distinct directions. Firstly we look at matrices where one

More information

CS 143 Linear Algebra Review

CS 143 Linear Algebra Review CS 143 Linear Algebra Review Stefan Roth September 29, 2003 Introductory Remarks This review does not aim at mathematical rigor very much, but instead at ease of understanding and conciseness. Please see

More information

Math 290-2: Linear Algebra & Multivariable Calculus Northwestern University, Lecture Notes

Math 290-2: Linear Algebra & Multivariable Calculus Northwestern University, Lecture Notes Math 290-2: Linear Algebra & Multivariable Calculus Northwestern University, Lecture Notes Written by Santiago Cañez These are notes which provide a basic summary of each lecture for Math 290-2, the second

More information

Appendix A: Matrices

Appendix A: Matrices Appendix A: Matrices A matrix is a rectangular array of numbers Such arrays have rows and columns The numbers of rows and columns are referred to as the dimensions of a matrix A matrix with, say, 5 rows

More information

Repeated Eigenvalues and Symmetric Matrices

Repeated Eigenvalues and Symmetric Matrices Repeated Eigenvalues and Symmetric Matrices. Introduction In this Section we further develop the theory of eigenvalues and eigenvectors in two distinct directions. Firstly we look at matrices where one

More information

Assignment 1 Math 5341 Linear Algebra Review. Give complete answers to each of the following questions. Show all of your work.

Assignment 1 Math 5341 Linear Algebra Review. Give complete answers to each of the following questions. Show all of your work. Assignment 1 Math 5341 Linear Algebra Review Give complete answers to each of the following questions Show all of your work Note: You might struggle with some of these questions, either because it has

More information

Eigenvalues and diagonalization

Eigenvalues and diagonalization Eigenvalues and diagonalization Patrick Breheny November 15 Patrick Breheny BST 764: Applied Statistical Modeling 1/20 Introduction The next topic in our course, principal components analysis, revolves

More information

Properties of Matrices and Operations on Matrices

Properties of Matrices and Operations on Matrices Properties of Matrices and Operations on Matrices A common data structure for statistical analysis is a rectangular array or matris. Rows represent individual observational units, or just observations,

More information

Columbus State Community College Mathematics Department Public Syllabus

Columbus State Community College Mathematics Department Public Syllabus Columbus State Community College Mathematics Department Public Syllabus Course and Number: MATH 2568 Elementary Linear Algebra Credits: 4 Class Hours Per Week: 4 Prerequisites: MATH 2153 with a C or higher

More information

Week Quadratic forms. Principal axes theorem. Text reference: this material corresponds to parts of sections 5.5, 8.2,

Week Quadratic forms. Principal axes theorem. Text reference: this material corresponds to parts of sections 5.5, 8.2, Math 051 W008 Margo Kondratieva Week 10-11 Quadratic forms Principal axes theorem Text reference: this material corresponds to parts of sections 55, 8, 83 89 Section 41 Motivation and introduction Consider

More information

Math 520 Exam 2 Topic Outline Sections 1 3 (Xiao/Dumas/Liaw) Spring 2008

Math 520 Exam 2 Topic Outline Sections 1 3 (Xiao/Dumas/Liaw) Spring 2008 Math 520 Exam 2 Topic Outline Sections 1 3 (Xiao/Dumas/Liaw) Spring 2008 Exam 2 will be held on Tuesday, April 8, 7-8pm in 117 MacMillan What will be covered The exam will cover material from the lectures

More information

Math Linear Algebra Final Exam Review Sheet

Math Linear Algebra Final Exam Review Sheet Math 15-1 Linear Algebra Final Exam Review Sheet Vector Operations Vector addition is a component-wise operation. Two vectors v and w may be added together as long as they contain the same number n of

More information

LINEAR ALGEBRA KNOWLEDGE SURVEY

LINEAR ALGEBRA KNOWLEDGE SURVEY LINEAR ALGEBRA KNOWLEDGE SURVEY Instructions: This is a Knowledge Survey. For this assignment, I am only interested in your level of confidence about your ability to do the tasks on the following pages.

More information

235 Final exam review questions

235 Final exam review questions 5 Final exam review questions Paul Hacking December 4, 0 () Let A be an n n matrix and T : R n R n, T (x) = Ax the linear transformation with matrix A. What does it mean to say that a vector v R n is an

More information

Linear Algebra: Matrix Eigenvalue Problems

Linear Algebra: Matrix Eigenvalue Problems CHAPTER8 Linear Algebra: Matrix Eigenvalue Problems Chapter 8 p1 A matrix eigenvalue problem considers the vector equation (1) Ax = λx. 8.0 Linear Algebra: Matrix Eigenvalue Problems Here A is a given

More information

Linear Algebra March 16, 2019

Linear Algebra March 16, 2019 Linear Algebra March 16, 2019 2 Contents 0.1 Notation................................ 4 1 Systems of linear equations, and matrices 5 1.1 Systems of linear equations..................... 5 1.2 Augmented

More information

1 Last time: least-squares problems

1 Last time: least-squares problems MATH Linear algebra (Fall 07) Lecture Last time: least-squares problems Definition. If A is an m n matrix and b R m, then a least-squares solution to the linear system Ax = b is a vector x R n such that

More information

1 Proof techniques. CS 224W Linear Algebra, Probability, and Proof Techniques

1 Proof techniques. CS 224W Linear Algebra, Probability, and Proof Techniques 1 Proof techniques Here we will learn to prove universal mathematical statements, like the square of any odd number is odd. It s easy enough to show that this is true in specific cases for example, 3 2

More information

18.06 Quiz 2 April 7, 2010 Professor Strang

18.06 Quiz 2 April 7, 2010 Professor Strang 18.06 Quiz 2 April 7, 2010 Professor Strang Your PRINTED name is: 1. Your recitation number or instructor is 2. 3. 1. (33 points) (a) Find the matrix P that projects every vector b in R 3 onto the line

More information

MATH 369 Linear Algebra

MATH 369 Linear Algebra Assignment # Problem # A father and his two sons are together 00 years old. The father is twice as old as his older son and 30 years older than his younger son. How old is each person? Problem # 2 Determine

More information

Glossary of Linear Algebra Terms. Prepared by Vince Zaccone For Campus Learning Assistance Services at UCSB

Glossary of Linear Algebra Terms. Prepared by Vince Zaccone For Campus Learning Assistance Services at UCSB Glossary of Linear Algebra Terms Basis (for a subspace) A linearly independent set of vectors that spans the space Basic Variable A variable in a linear system that corresponds to a pivot column in the

More information

Linear Algebra Review. Vectors

Linear Algebra Review. Vectors Linear Algebra Review 9/4/7 Linear Algebra Review By Tim K. Marks UCSD Borrows heavily from: Jana Kosecka http://cs.gmu.edu/~kosecka/cs682.html Virginia de Sa (UCSD) Cogsci 8F Linear Algebra review Vectors

More information

Mathematical foundations - linear algebra

Mathematical foundations - linear algebra Mathematical foundations - linear algebra Andrea Passerini passerini@disi.unitn.it Machine Learning Vector space Definition (over reals) A set X is called a vector space over IR if addition and scalar

More information

Lecture 22: A Review of Linear Algebra and an Introduction to The Multivariate Normal Distribution

Lecture 22: A Review of Linear Algebra and an Introduction to The Multivariate Normal Distribution Department of Mathematics Ma 3/103 KC Border Introduction to Probability and Statistics Winter 2017 Lecture 22: A Review of Linear Algebra and an Introduction to The Multivariate Normal Distribution Relevant

More information

Linear Algebra for Machine Learning. Sargur N. Srihari

Linear Algebra for Machine Learning. Sargur N. Srihari Linear Algebra for Machine Learning Sargur N. srihari@cedar.buffalo.edu 1 Overview Linear Algebra is based on continuous math rather than discrete math Computer scientists have little experience with it

More information

CS123 INTRODUCTION TO COMPUTER GRAPHICS. Linear Algebra /34

CS123 INTRODUCTION TO COMPUTER GRAPHICS. Linear Algebra /34 Linear Algebra /34 Vectors A vector is a magnitude and a direction Magnitude = v Direction Also known as norm, length Represented by unit vectors (vectors with a length of 1 that point along distinct axes)

More information

Chapter 3 Transformations

Chapter 3 Transformations Chapter 3 Transformations An Introduction to Optimization Spring, 2014 Wei-Ta Chu 1 Linear Transformations A function is called a linear transformation if 1. for every and 2. for every If we fix the bases

More information

Linear Algebra & Geometry why is linear algebra useful in computer vision?

Linear Algebra & Geometry why is linear algebra useful in computer vision? Linear Algebra & Geometry why is linear algebra useful in computer vision? References: -Any book on linear algebra! -[HZ] chapters 2, 4 Some of the slides in this lecture are courtesy to Prof. Octavia

More information

We could express the left side as a sum of vectors and obtain the Vector Form of a Linear System: a 12 a x n. a m2

We could express the left side as a sum of vectors and obtain the Vector Form of a Linear System: a 12 a x n. a m2 Week 22 Equations, Matrices and Transformations Coefficient Matrix and Vector Forms of a Linear System Suppose we have a system of m linear equations in n unknowns a 11 x 1 + a 12 x 2 + + a 1n x n b 1

More information

Linear Algebra Review. Fei-Fei Li

Linear Algebra Review. Fei-Fei Li Linear Algebra Review Fei-Fei Li 1 / 51 Vectors Vectors and matrices are just collections of ordered numbers that represent something: movements in space, scaling factors, pixel brightnesses, etc. A vector

More information

Final Exam Practice Problems Answers Math 24 Winter 2012

Final Exam Practice Problems Answers Math 24 Winter 2012 Final Exam Practice Problems Answers Math 4 Winter 0 () The Jordan product of two n n matrices is defined as A B = (AB + BA), where the products inside the parentheses are standard matrix product. Is the

More information

1. Diagonalize the matrix A if possible, that is, find an invertible matrix P and a diagonal

1. Diagonalize the matrix A if possible, that is, find an invertible matrix P and a diagonal . Diagonalize the matrix A if possible, that is, find an invertible matrix P and a diagonal 3 9 matrix D such that A = P DP, for A =. 3 4 3 (a) P = 4, D =. 3 (b) P = 4, D =. (c) P = 4 8 4, D =. 3 (d) P

More information

Practice Exam. 2x 1 + 4x 2 + 2x 3 = 4 x 1 + 2x 2 + 3x 3 = 1 2x 1 + 3x 2 + 4x 3 = 5

Practice Exam. 2x 1 + 4x 2 + 2x 3 = 4 x 1 + 2x 2 + 3x 3 = 1 2x 1 + 3x 2 + 4x 3 = 5 Practice Exam. Solve the linear system using an augmented matrix. State whether the solution is unique, there are no solutions or whether there are infinitely many solutions. If the solution is unique,

More information

Applied Linear Algebra in Geoscience Using MATLAB

Applied Linear Algebra in Geoscience Using MATLAB Applied Linear Algebra in Geoscience Using MATLAB Contents Getting Started Creating Arrays Mathematical Operations with Arrays Using Script Files and Managing Data Two-Dimensional Plots Programming in

More information

Math 3191 Applied Linear Algebra

Math 3191 Applied Linear Algebra Math 9 Applied Linear Algebra Lecture 9: Diagonalization Stephen Billups University of Colorado at Denver Math 9Applied Linear Algebra p./9 Section. Diagonalization The goal here is to develop a useful

More information

MAC Module 12 Eigenvalues and Eigenvectors. Learning Objectives. Upon completing this module, you should be able to:

MAC Module 12 Eigenvalues and Eigenvectors. Learning Objectives. Upon completing this module, you should be able to: MAC Module Eigenvalues and Eigenvectors Learning Objectives Upon completing this module, you should be able to: Solve the eigenvalue problem by finding the eigenvalues and the corresponding eigenvectors

More information

MAC Module 12 Eigenvalues and Eigenvectors

MAC Module 12 Eigenvalues and Eigenvectors MAC 23 Module 2 Eigenvalues and Eigenvectors Learning Objectives Upon completing this module, you should be able to:. Solve the eigenvalue problem by finding the eigenvalues and the corresponding eigenvectors

More information

Math 291-2: Lecture Notes Northwestern University, Winter 2016

Math 291-2: Lecture Notes Northwestern University, Winter 2016 Math 291-2: Lecture Notes Northwestern University, Winter 2016 Written by Santiago Cañez These are lecture notes for Math 291-2, the second quarter of MENU: Intensive Linear Algebra and Multivariable Calculus,

More information

Linear Algebra- Final Exam Review

Linear Algebra- Final Exam Review Linear Algebra- Final Exam Review. Let A be invertible. Show that, if v, v, v 3 are linearly independent vectors, so are Av, Av, Av 3. NOTE: It should be clear from your answer that you know the definition.

More information

5.) For each of the given sets of vectors, determine whether or not the set spans R 3. Give reasons for your answers.

5.) For each of the given sets of vectors, determine whether or not the set spans R 3. Give reasons for your answers. Linear Algebra - Test File - Spring Test # For problems - consider the following system of equations. x + y - z = x + y + 4z = x + y + 6z =.) Solve the system without using your calculator..) Find the

More information

Elementary maths for GMT

Elementary maths for GMT Elementary maths for GMT Linear Algebra Part 2: Matrices, Elimination and Determinant m n matrices The system of m linear equations in n variables x 1, x 2,, x n a 11 x 1 + a 12 x 2 + + a 1n x n = b 1

More information

Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88

Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88 Math Camp 2010 Lecture 4: Linear Algebra Xiao Yu Wang MIT Aug 2010 Xiao Yu Wang (MIT) Math Camp 2010 08/10 1 / 88 Linear Algebra Game Plan Vector Spaces Linear Transformations and Matrices Determinant

More information

Linear Algebra Review. Fei-Fei Li

Linear Algebra Review. Fei-Fei Li Linear Algebra Review Fei-Fei Li 1 / 37 Vectors Vectors and matrices are just collections of ordered numbers that represent something: movements in space, scaling factors, pixel brightnesses, etc. A vector

More information

Math 4A Notes. Written by Victoria Kala Last updated June 11, 2017

Math 4A Notes. Written by Victoria Kala Last updated June 11, 2017 Math 4A Notes Written by Victoria Kala vtkala@math.ucsb.edu Last updated June 11, 2017 Systems of Linear Equations A linear equation is an equation that can be written in the form a 1 x 1 + a 2 x 2 +...

More information

. a m1 a mn. a 1 a 2 a = a n

. a m1 a mn. a 1 a 2 a = a n Biostat 140655, 2008: Matrix Algebra Review 1 Definition: An m n matrix, A m n, is a rectangular array of real numbers with m rows and n columns Element in the i th row and the j th column is denoted by

More information

401 Review. 6. Power analysis for one/two-sample hypothesis tests and for correlation analysis.

401 Review. 6. Power analysis for one/two-sample hypothesis tests and for correlation analysis. 401 Review Major topics of the course 1. Univariate analysis 2. Bivariate analysis 3. Simple linear regression 4. Linear algebra 5. Multiple regression analysis Major analysis methods 1. Graphical analysis

More information

Matrix Algebra. Learning Objectives. Size of Matrix

Matrix Algebra. Learning Objectives. Size of Matrix Matrix Algebra 1 Learning Objectives 1. Find the sum and difference of two matrices 2. Find scalar multiples of a matrix 3. Find the product of two matrices 4. Find the inverse of a matrix 5. Solve a system

More information

Lecture 1 Review: Linear models have the form (in matrix notation) Y = Xβ + ε,

Lecture 1 Review: Linear models have the form (in matrix notation) Y = Xβ + ε, 2. REVIEW OF LINEAR ALGEBRA 1 Lecture 1 Review: Linear models have the form (in matrix notation) Y = Xβ + ε, where Y n 1 response vector and X n p is the model matrix (or design matrix ) with one row for

More information

L3: Review of linear algebra and MATLAB

L3: Review of linear algebra and MATLAB L3: Review of linear algebra and MATLAB Vector and matrix notation Vectors Matrices Vector spaces Linear transformations Eigenvalues and eigenvectors MATLAB primer CSCE 666 Pattern Analysis Ricardo Gutierrez-Osuna

More information

Inverse of a Square Matrix. For an N N square matrix A, the inverse of A, 1

Inverse of a Square Matrix. For an N N square matrix A, the inverse of A, 1 Inverse of a Square Matrix For an N N square matrix A, the inverse of A, 1 A, exists if and only if A is of full rank, i.e., if and only if no column of A is a linear combination 1 of the others. A is

More information

MATH 1120 (LINEAR ALGEBRA 1), FINAL EXAM FALL 2011 SOLUTIONS TO PRACTICE VERSION

MATH 1120 (LINEAR ALGEBRA 1), FINAL EXAM FALL 2011 SOLUTIONS TO PRACTICE VERSION MATH (LINEAR ALGEBRA ) FINAL EXAM FALL SOLUTIONS TO PRACTICE VERSION Problem (a) For each matrix below (i) find a basis for its column space (ii) find a basis for its row space (iii) determine whether

More information

Mathematical foundations - linear algebra

Mathematical foundations - linear algebra Mathematical foundations - linear algebra Andrea Passerini passerini@disi.unitn.it Machine Learning Vector space Definition (over reals) A set X is called a vector space over IR if addition and scalar

More information

Vectors To begin, let us describe an element of the state space as a point with numerical coordinates, that is x 1. x 2. x =

Vectors To begin, let us describe an element of the state space as a point with numerical coordinates, that is x 1. x 2. x = Linear Algebra Review Vectors To begin, let us describe an element of the state space as a point with numerical coordinates, that is x 1 x x = 2. x n Vectors of up to three dimensions are easy to diagram.

More information

Recall the convention that, for us, all vectors are column vectors.

Recall the convention that, for us, all vectors are column vectors. Some linear algebra Recall the convention that, for us, all vectors are column vectors. 1. Symmetric matrices Let A be a real matrix. Recall that a complex number λ is an eigenvalue of A if there exists

More information

(v, w) = arccos( < v, w >

(v, w) = arccos( < v, w > MA322 Sathaye Notes on Inner Products Notes on Chapter 6 Inner product. Given a real vector space V, an inner product is defined to be a bilinear map F : V V R such that the following holds: For all v

More information