Basic Concepts in Matrix Algebra


A column array of $p$ elements is called a vector of dimension $p$ and is written as
$$\mathbf{x}_{p \times 1} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_p \end{bmatrix}.$$
The transpose of the column vector $\mathbf{x}_{p \times 1}$ is the row vector
$$\mathbf{x}' = [x_1 \; x_2 \; \cdots \; x_p].$$
A vector can be represented in $p$-space as a directed line with components along the $p$ axes.

Basic Matrix Concepts (cont'd)

Two vectors can be added if they have the same dimension. Addition is carried out elementwise:
$$\mathbf{x} + \mathbf{y} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_p \end{bmatrix} + \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_p \end{bmatrix} = \begin{bmatrix} x_1 + y_1 \\ x_2 + y_2 \\ \vdots \\ x_p + y_p \end{bmatrix}.$$
A vector can be contracted or expanded if multiplied by a constant $c$. Multiplication is also elementwise:
$$c\mathbf{x} = c \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_p \end{bmatrix} = \begin{bmatrix} cx_1 \\ cx_2 \\ \vdots \\ cx_p \end{bmatrix}.$$

Examples

[Worked numerical examples of the transpose, multiplication by a negative constant, and vector addition.]
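
Since the slide's numeric entries are garbled, here is a minimal NumPy sketch, with made-up values, of the same elementwise operations:

```python
import numpy as np

# Made-up example vectors; the original slide's values were lost.
x = np.array([1.0, 3.0, -2.0])
y = np.array([4.0, 0.0, 5.0])

print(x.T)     # transpose of a 1-D array is itself; use x.reshape(-1, 1) for a column
print(-4 * x)  # scalar multiplication is elementwise: [-4, -12, 8]
print(x + y)   # vector addition is elementwise: [5, 3, 3]
```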

Basic Matrix Concepts (cont'd)

Multiplication by $c > 0$ does not change the direction of $\mathbf{x}$. The direction is reversed if $c < 0$.

Basic Matrix Concepts (cont'd)

The length of a vector $\mathbf{x}$ is its Euclidean distance from the origin:
$$L_\mathbf{x} = \sqrt{\sum_{j=1}^p x_j^2}.$$
Multiplication of a vector $\mathbf{x}$ by a constant $c$ changes the length:
$$L_{c\mathbf{x}} = \sqrt{\sum_{j=1}^p c^2 x_j^2} = |c| \sqrt{\sum_{j=1}^p x_j^2} = |c| \, L_\mathbf{x}.$$
If $c = L_\mathbf{x}^{-1}$, then $c\mathbf{x}$ is a vector of unit length.

Examples

The length of $\mathbf{x} = [2 \; 1 \; -4 \; -2]'$ is
$$L_\mathbf{x} = \sqrt{(2)^2 + (1)^2 + (-4)^2 + (-2)^2} = \sqrt{25} = 5.$$
Then
$$\mathbf{z} = \frac{1}{5}\mathbf{x} = \begin{bmatrix} 2/5 \\ 1/5 \\ -4/5 \\ -2/5 \end{bmatrix}$$
is a vector of unit length.
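
A quick NumPy check of this example:

```python
import numpy as np

x = np.array([2.0, 1.0, -4.0, -2.0])
L = np.linalg.norm(x)        # Euclidean length: sqrt(4 + 1 + 16 + 4) = 5.0
z = x / L                    # rescale by 1/L_x to get a unit-length vector
print(L, np.linalg.norm(z))  # 5.0 1.0
```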

Angle Between Vectors

Consider two vectors $\mathbf{x}$ and $\mathbf{y}$ in two dimensions. If $\theta_1$ is the angle between $\mathbf{x}$ and the horizontal axis and $\theta_2 > \theta_1$ is the angle between $\mathbf{y}$ and the horizontal axis, then
$$\cos(\theta_1) = \frac{x_1}{L_\mathbf{x}}, \quad \cos(\theta_2) = \frac{y_1}{L_\mathbf{y}}, \quad \sin(\theta_1) = \frac{x_2}{L_\mathbf{x}}, \quad \sin(\theta_2) = \frac{y_2}{L_\mathbf{y}}.$$
If $\theta$ is the angle between $\mathbf{x}$ and $\mathbf{y}$, then
$$\cos(\theta) = \cos(\theta_2 - \theta_1) = \cos(\theta_2)\cos(\theta_1) + \sin(\theta_2)\sin(\theta_1).$$
Then
$$\cos(\theta) = \frac{x_1 y_1 + x_2 y_2}{L_\mathbf{x} L_\mathbf{y}}.$$

Angle Between Vectors (cont'd)

[Figure: two vectors in the plane and the angle $\theta$ between them.]

Inner Product

The inner product of two vectors $\mathbf{x}$ and $\mathbf{y}$ is
$$\mathbf{x}'\mathbf{y} = \sum_{j=1}^p x_j y_j.$$
Then $L_\mathbf{x} = \sqrt{\mathbf{x}'\mathbf{x}}$, $L_\mathbf{y} = \sqrt{\mathbf{y}'\mathbf{y}}$, and
$$\cos(\theta) = \frac{\mathbf{x}'\mathbf{y}}{\sqrt{(\mathbf{x}'\mathbf{x})(\mathbf{y}'\mathbf{y})}}.$$
Since $\cos(\theta) = 0$ when $\mathbf{x}'\mathbf{y} = 0$, and $\cos(\theta) = 0$ for $\theta = 90°$ or $\theta = 270°$, the vectors are perpendicular (orthogonal) when $\mathbf{x}'\mathbf{y} = 0$.
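
A minimal NumPy sketch of the inner product and the angle it defines (vectors chosen arbitrarily):

```python
import numpy as np

x = np.array([1.0, 0.0])
y = np.array([1.0, 1.0])

cos_theta = (x @ y) / (np.linalg.norm(x) * np.linalg.norm(y))
theta = np.degrees(np.arccos(cos_theta))
print(cos_theta, theta)  # 0.7071..., 45.0 degrees

# Orthogonality: the inner product of perpendicular vectors is zero.
print(np.array([1.0, 0.0]) @ np.array([0.0, 1.0]))  # 0.0
```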

Linear Dependence

Two vectors, $\mathbf{x}$ and $\mathbf{y}$, are linearly dependent if there exist two constants $c_1$ and $c_2$, not both zero, such that
$$c_1\mathbf{x} + c_2\mathbf{y} = \mathbf{0}.$$
If two vectors are linearly dependent, then one can be written as a linear combination of the other. From above (taking $c_1 \neq 0$): $\mathbf{x} = -(c_2/c_1)\mathbf{y}$.

More generally, $k$ vectors $\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_k$ are linearly dependent if there exist constants $(c_1, c_2, \ldots, c_k)$, not all zero, such that
$$\sum_{j=1}^k c_j \mathbf{x}_j = \mathbf{0}.$$

Vectors of the same dimension that are not linearly dependent are said to be linearly independent.

Linear Independence: Example

Let
$$\mathbf{x}_1 = \begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix}, \quad \mathbf{x}_2 = \begin{bmatrix} 1 \\ 0 \\ -1 \end{bmatrix}, \quad \mathbf{x}_3 = \begin{bmatrix} 1 \\ -2 \\ 1 \end{bmatrix}.$$
Then $c_1\mathbf{x}_1 + c_2\mathbf{x}_2 + c_3\mathbf{x}_3 = \mathbf{0}$ if
$$c_1 + c_2 + c_3 = 0, \qquad 2c_1 - 2c_3 = 0, \qquad c_1 - c_2 + c_3 = 0.$$
The unique solution is $c_1 = c_2 = c_3 = 0$, so the vectors are linearly independent.
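
One way to verify this numerically: stack the vectors as columns and check that the matrix has full column rank (a minimal NumPy sketch):

```python
import numpy as np

# Columns are x1, x2, x3 from the example above.
X = np.array([[1.0,  1.0,  1.0],
              [2.0,  0.0, -2.0],
              [1.0, -1.0,  1.0]])

# Full column rank (3) means the only solution of X c = 0 is c = 0,
# i.e. the three vectors are linearly independent.
print(np.linalg.matrix_rank(X))  # 3
```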

Projections

The projection of $\mathbf{x}$ on $\mathbf{y}$ is defined by
$$\text{Projection of } \mathbf{x} \text{ on } \mathbf{y} = \frac{\mathbf{x}'\mathbf{y}}{\mathbf{y}'\mathbf{y}}\,\mathbf{y} = \frac{\mathbf{x}'\mathbf{y}}{L_\mathbf{y}} \cdot \frac{1}{L_\mathbf{y}}\,\mathbf{y}.$$
The length of the projection is
$$\text{Length of projection} = \frac{|\mathbf{x}'\mathbf{y}|}{L_\mathbf{y}} = L_\mathbf{x} \left| \frac{\mathbf{x}'\mathbf{y}}{L_\mathbf{x} L_\mathbf{y}} \right| = L_\mathbf{x}\,|\cos(\theta)|,$$
where $\theta$ is the angle between $\mathbf{x}$ and $\mathbf{y}$.
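
A short NumPy sketch of the projection formula (arbitrary vectors):

```python
import numpy as np

x = np.array([2.0, 1.0])
y = np.array([3.0, 0.0])

proj = (x @ y) / (y @ y) * y              # projection of x on y
length = abs(x @ y) / np.linalg.norm(y)   # its length, L_x * |cos(theta)|
print(proj, length)                       # [2. 0.] 2.0
```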

Matrices

A matrix $\mathbf{A}$ is an array of elements $a_{ij}$ with $n$ rows and $p$ columns:
$$\mathbf{A} = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1p} \\ a_{21} & a_{22} & \cdots & a_{2p} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{np} \end{bmatrix}.$$
The transpose $\mathbf{A}'$ has $p$ rows and $n$ columns. The $j$-th row of $\mathbf{A}'$ is the $j$-th column of $\mathbf{A}$:
$$\mathbf{A}' = \begin{bmatrix} a_{11} & a_{21} & \cdots & a_{n1} \\ a_{12} & a_{22} & \cdots & a_{n2} \\ \vdots & \vdots & & \vdots \\ a_{1p} & a_{2p} & \cdots & a_{np} \end{bmatrix}.$$

Matrix Algebra

Multiplication of $\mathbf{A}$ by a constant $c$ is carried out element by element:
$$c\mathbf{A} = \begin{bmatrix} ca_{11} & ca_{12} & \cdots & ca_{1p} \\ ca_{21} & ca_{22} & \cdots & ca_{2p} \\ \vdots & \vdots & & \vdots \\ ca_{n1} & ca_{n2} & \cdots & ca_{np} \end{bmatrix}.$$

Matrix Addition

Two matrices $\mathbf{A}_{n \times p} = \{a_{ij}\}$ and $\mathbf{B}_{n \times p} = \{b_{ij}\}$ of the same dimensions can be added element by element. The resulting matrix is $\mathbf{C}_{n \times p} = \{c_{ij}\} = \{a_{ij} + b_{ij}\}$:
$$\mathbf{C} = \mathbf{A} + \mathbf{B} = \begin{bmatrix} a_{11} + b_{11} & a_{12} + b_{12} & \cdots & a_{1p} + b_{1p} \\ a_{21} + b_{21} & a_{22} + b_{22} & \cdots & a_{2p} + b_{2p} \\ \vdots & \vdots & & \vdots \\ a_{n1} + b_{n1} & a_{n2} + b_{n2} & \cdots & a_{np} + b_{np} \end{bmatrix}.$$

Examples

[Worked numerical examples of matrix transposition and matrix addition.]

Matrix Multiplication

Multiplication of two matrices $\mathbf{A}_{n \times p}$ and $\mathbf{B}_{m \times q}$ can be carried out only if the matrices are compatible for multiplication:

- $\mathbf{A}_{n \times p} \mathbf{B}_{m \times q}$: compatible if $p = m$.
- $\mathbf{B}_{m \times q} \mathbf{A}_{n \times p}$: compatible if $q = n$.

The element in the $i$-th row and the $j$-th column of $\mathbf{A}\mathbf{B}$ is the inner product of the $i$-th row of $\mathbf{A}$ with the $j$-th column of $\mathbf{B}$.

Multiplication Examples

[Worked numerical examples of matrix products.]
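
Since the slide's numeric entries are lost, here is a small NumPy sketch (arbitrary matrices) of compatibility and the row-by-column rule:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])  # 2 x 3
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])       # 3 x 2

C = A @ B                        # compatible: inner dimensions 3 = 3; result is 2 x 2
print(C)                         # [[ 4.  5.], [10. 11.]]

# Element (0, 0) is the inner product of row 0 of A with column 0 of B.
print(A[0, :] @ B[:, 0])         # 4.0
```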

Identity Matrix

An identity matrix, denoted by $\mathbf{I}$, is a square matrix with 1's along the main diagonal and 0's everywhere else. For example,
$$\mathbf{I}_{2 \times 2} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \quad \text{and} \quad \mathbf{I}_{3 \times 3} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}.$$
If $\mathbf{A}$ is a square matrix, then $\mathbf{A}\mathbf{I} = \mathbf{I}\mathbf{A} = \mathbf{A}$. For rectangular matrices, $\mathbf{I}_{n \times n}\mathbf{A}_{n \times p} = \mathbf{A}_{n \times p}$, but $\mathbf{A}_{n \times p}\mathbf{I}_{n \times n}$ is not defined for $p \neq n$.

Symmetric Matrices

A square matrix is symmetric if $\mathbf{A} = \mathbf{A}'$. If a square matrix $\mathbf{A}$ has elements $\{a_{ij}\}$, then $\mathbf{A}$ is symmetric if $a_{ij} = a_{ji}$. For example,
$$\begin{bmatrix} 1 & 2 \\ 2 & 3 \end{bmatrix}$$
is symmetric.

Inverse Matrix

Consider two square matrices $\mathbf{A}_{k \times k}$ and $\mathbf{B}_{k \times k}$. If
$$\mathbf{A}\mathbf{B} = \mathbf{B}\mathbf{A} = \mathbf{I},$$
then $\mathbf{B}$ is the inverse of $\mathbf{A}$, denoted $\mathbf{A}^{-1}$. The inverse of $\mathbf{A}$ exists only if the columns of $\mathbf{A}$ are linearly independent. If $\mathbf{A} = \text{diag}\{a_{jj}\}$ is diagonal, then $\mathbf{A}^{-1} = \text{diag}\{1/a_{jj}\}$.

Inverse Matrix

For a $2 \times 2$ matrix $\mathbf{A}$, the inverse is
$$\mathbf{A}^{-1} = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}^{-1} = \frac{1}{\det(\mathbf{A})} \begin{bmatrix} a_{22} & -a_{12} \\ -a_{21} & a_{11} \end{bmatrix},$$
where $\det(\mathbf{A}) = a_{11}a_{22} - a_{12}a_{21}$ denotes the determinant of $\mathbf{A}$.
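
A quick NumPy check of the $2 \times 2$ formula on an arbitrary invertible matrix:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [2.0, 4.0]])

det = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]          # 3*4 - 1*2 = 10
A_inv_formula = np.array([[ A[1, 1], -A[0, 1]],
                          [-A[1, 0],  A[0, 0]]]) / det
print(np.allclose(A_inv_formula, np.linalg.inv(A)))  # True
print(np.allclose(A @ A_inv_formula, np.eye(2)))     # A A^{-1} = I
```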

Orthogonal Matrices

A square matrix $\mathbf{Q}$ is orthogonal if
$$\mathbf{Q}\mathbf{Q}' = \mathbf{Q}'\mathbf{Q} = \mathbf{I},$$
or equivalently $\mathbf{Q}' = \mathbf{Q}^{-1}$. If $\mathbf{Q}$ is orthogonal, its rows and columns have unit length ($\mathbf{q}_j'\mathbf{q}_j = 1$) and are mutually perpendicular ($\mathbf{q}_j'\mathbf{q}_k = 0$ for any $j \neq k$).

Eigenvalues and Eigenvectors

A square matrix $\mathbf{A}$ has an eigenvalue $\lambda$ with corresponding eigenvector $\mathbf{z} \neq \mathbf{0}$ if
$$\mathbf{A}\mathbf{z} = \lambda\mathbf{z}.$$
The eigenvalues of $\mathbf{A}$ are the solutions to $|\mathbf{A} - \lambda\mathbf{I}| = 0$. A normalized eigenvector (of unit length) is denoted by $\mathbf{e}$.

A $k \times k$ matrix $\mathbf{A}$ has $k$ pairs of eigenvalues and eigenvectors
$$\lambda_1, \mathbf{e}_1 \quad \lambda_2, \mathbf{e}_2 \quad \ldots \quad \lambda_k, \mathbf{e}_k,$$
where $\mathbf{e}_i'\mathbf{e}_i = 1$, $\mathbf{e}_i'\mathbf{e}_j = 0$, and the eigenvectors are unique up to a change in sign unless two or more eigenvalues are equal.
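
A minimal NumPy sketch; for symmetric matrices, `np.linalg.eigh` returns the eigenvalues and orthonormal eigenvectors directly:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])  # symmetric, eigenvalues 1 and 3

lam, E = np.linalg.eigh(A)  # columns of E are normalized eigenvectors
print(lam)                  # [1. 3.]
print(np.allclose(A @ E[:, 0], lam[0] * E[:, 0]))  # A e = lambda e -> True
print(np.allclose(E.T @ E, np.eye(2)))             # orthonormal -> True
```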

Spectral Decomposition

Eigenvalues and eigenvectors will play an important role in this course. For example, principal components are based on the eigenvalues and eigenvectors of sample covariance matrices.

The spectral decomposition of a $k \times k$ symmetric matrix $\mathbf{A}$ is
$$\mathbf{A} = \lambda_1\mathbf{e}_1\mathbf{e}_1' + \lambda_2\mathbf{e}_2\mathbf{e}_2' + \cdots + \lambda_k\mathbf{e}_k\mathbf{e}_k' = [\mathbf{e}_1\; \mathbf{e}_2\; \cdots\; \mathbf{e}_k] \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & \lambda_k \end{bmatrix} [\mathbf{e}_1\; \mathbf{e}_2\; \cdots\; \mathbf{e}_k]' = \mathbf{P}\boldsymbol{\Lambda}\mathbf{P}'.$$
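
Reconstructing $\mathbf{A}$ from its spectral decomposition (a NumPy sketch, reusing the symmetric matrix above):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam, P = np.linalg.eigh(A)  # A = P diag(lam) P'

# Sum of rank-one pieces lambda_j e_j e_j'
A_rebuilt = sum(lam[j] * np.outer(P[:, j], P[:, j]) for j in range(2))
print(np.allclose(A_rebuilt, A))               # True
print(np.allclose(P @ np.diag(lam) @ P.T, A))  # matrix form, also True
```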

Determinant and Trace

The trace of a $k \times k$ matrix $\mathbf{A}$ is the sum of its diagonal elements, i.e., $\text{trace}(\mathbf{A}) = \sum_{i=1}^k a_{ii}$.

The trace of a square, symmetric matrix $\mathbf{A}$ is the sum of its eigenvalues, i.e.,
$$\text{trace}(\mathbf{A}) = \sum_{i=1}^k a_{ii} = \sum_{i=1}^k \lambda_i.$$
The determinant of a square, symmetric matrix $\mathbf{A}$ is the product of its eigenvalues, i.e.,
$$|\mathbf{A}| = \prod_{i=1}^k \lambda_i.$$
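
A quick numerical check of both identities (NumPy, arbitrary symmetric matrix):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
lam = np.linalg.eigvalsh(A)

print(np.isclose(np.trace(A), lam.sum()))        # trace = sum of eigenvalues -> True
print(np.isclose(np.linalg.det(A), lam.prod()))  # determinant = product -> True
```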

Rank of a Matrix

The rank of a square matrix $\mathbf{A}$ is:

- the number of linearly independent rows,
- the number of linearly independent columns,
- the number of non-zero eigenvalues.

The inverse of a $k \times k$ matrix $\mathbf{A}$ exists if and only if $\text{rank}(\mathbf{A}) = k$, i.e., there are no zero eigenvalues.

Positive Definite Matrix

For a $k \times k$ symmetric matrix $\mathbf{A}$ and a vector $\mathbf{x} = [x_1, x_2, \ldots, x_k]'$, the quantity $\mathbf{x}'\mathbf{A}\mathbf{x}$ is called a quadratic form. Note that
$$\mathbf{x}'\mathbf{A}\mathbf{x} = \sum_{i=1}^k \sum_{j=1}^k a_{ij} x_i x_j.$$
If $\mathbf{x}'\mathbf{A}\mathbf{x} \ge 0$ for any vector $\mathbf{x}$, both $\mathbf{A}$ and the quadratic form are said to be non-negative definite. If $\mathbf{x}'\mathbf{A}\mathbf{x} > 0$ for any vector $\mathbf{x} \neq \mathbf{0}$, both $\mathbf{A}$ and the quadratic form are said to be positive definite.

Example 2.11

Show that the matrix of the quadratic form $3x_1^2 + 2x_2^2 - 2\sqrt{2}\,x_1x_2$ is positive definite.

For
$$\mathbf{A} = \begin{bmatrix} 3 & -\sqrt{2} \\ -\sqrt{2} & 2 \end{bmatrix},$$
the eigenvalues are $\lambda_1 = 4$, $\lambda_2 = 1$. Then $\mathbf{A} = 4\mathbf{e}_1\mathbf{e}_1' + \mathbf{e}_2\mathbf{e}_2'$. Write
$$\mathbf{x}'\mathbf{A}\mathbf{x} = 4\mathbf{x}'\mathbf{e}_1\mathbf{e}_1'\mathbf{x} + \mathbf{x}'\mathbf{e}_2\mathbf{e}_2'\mathbf{x} = 4y_1^2 + y_2^2 \ge 0,$$
which is zero only for $y_1 = y_2 = 0$.

Example 2.11 (cont'd)

$y_1$, $y_2$ cannot both be zero for $\mathbf{x} \neq \mathbf{0}$ because
$$\mathbf{y} = \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} = \begin{bmatrix} \mathbf{e}_1' \\ \mathbf{e}_2' \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \mathbf{P}'_{2 \times 2}\,\mathbf{x}_{2 \times 1},$$
with $\mathbf{P}$ orthonormal so that $(\mathbf{P}')^{-1} = \mathbf{P}$. Then $\mathbf{x} = \mathbf{P}\mathbf{y}$, and since $\mathbf{x} \neq \mathbf{0}$ it follows that $\mathbf{y} \neq \mathbf{0}$.

Using the spectral decomposition, we can show that:

- $\mathbf{A}$ is positive definite if all of its eigenvalues are positive.
- $\mathbf{A}$ is non-negative definite if all of its eigenvalues are $\ge 0$.
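
A numerical check of Example 2.11, assuming the quadratic form as reconstructed above:

```python
import numpy as np

# Matrix of the quadratic form 3*x1^2 + 2*x2^2 - 2*sqrt(2)*x1*x2
A = np.array([[3.0,           -np.sqrt(2.0)],
              [-np.sqrt(2.0),  2.0]])

print(np.linalg.eigvalsh(A))  # [1. 4.] -- all positive, so A is positive definite

# Spot-check the quadratic form at a few random nonzero points.
rng = np.random.default_rng(0)
for _ in range(3):
    x = rng.normal(size=2)
    print(x @ A @ x > 0)      # True each time
```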

Distance and Quadratic Forms

For $\mathbf{x} = [x_1, x_2, \ldots, x_p]'$ and a $p \times p$ positive definite matrix $\mathbf{A}$,
$$d^2 = \mathbf{x}'\mathbf{A}\mathbf{x} > 0 \quad \text{when } \mathbf{x} \neq \mathbf{0}.$$
Thus, a positive definite quadratic form can be interpreted as a squared distance of $\mathbf{x}$ from the origin, and vice versa. The squared distance from $\mathbf{x}$ to a fixed point $\boldsymbol{\mu}$ is given by the quadratic form
$$(\mathbf{x} - \boldsymbol{\mu})'\mathbf{A}(\mathbf{x} - \boldsymbol{\mu}).$$

Distance and Quadratic Forms (cont'd)

We can interpret distance in terms of the eigenvalues and eigenvectors of $\mathbf{A}$ as well. Any point $\mathbf{x}$ at constant distance $c$ from the origin satisfies
$$\mathbf{x}'\mathbf{A}\mathbf{x} = \mathbf{x}'\Big(\sum_{j=1}^p \lambda_j\mathbf{e}_j\mathbf{e}_j'\Big)\mathbf{x} = \sum_{j=1}^p \lambda_j(\mathbf{x}'\mathbf{e}_j)^2 = c^2,$$
the expression for an ellipsoid in $p$ dimensions.

Note that the point $\mathbf{x} = c\lambda_1^{-1/2}\mathbf{e}_1$ is at a distance $c$ (in the direction of $\mathbf{e}_1$) from the origin because it satisfies $\mathbf{x}'\mathbf{A}\mathbf{x} = c^2$. The same is true for the points $\mathbf{x} = c\lambda_j^{-1/2}\mathbf{e}_j$, $j = 1, \ldots, p$. Thus, all points at distance $c$ lie on an ellipsoid with axes in the directions of the eigenvectors and with lengths proportional to $\lambda_j^{-1/2}$.

Distance and Quadratic Forms (cont'd)

[Figure: constant-distance ellipse with axes along the eigenvectors of $\mathbf{A}$.]

Square-Root Matrices

The spectral decomposition of a $p \times p$ positive definite matrix $\mathbf{A}$ yields
$$\mathbf{A} = \sum_{j=1}^p \lambda_j\mathbf{e}_j\mathbf{e}_j' = \mathbf{P}\boldsymbol{\Lambda}\mathbf{P}',$$
with $\boldsymbol{\Lambda}_{p \times p} = \text{diag}\{\lambda_j\}$, all $\lambda_j > 0$, and $\mathbf{P}_{p \times p} = [\mathbf{e}_1\; \mathbf{e}_2\; \ldots\; \mathbf{e}_p]$ an orthonormal matrix of eigenvectors. Then
$$\mathbf{A}^{-1} = \mathbf{P}\boldsymbol{\Lambda}^{-1}\mathbf{P}' = \sum_{j=1}^p \frac{1}{\lambda_j}\mathbf{e}_j\mathbf{e}_j'.$$
With $\boldsymbol{\Lambda}^{1/2} = \text{diag}\{\lambda_j^{1/2}\}$, a square-root matrix is
$$\mathbf{A}^{1/2} = \mathbf{P}\boldsymbol{\Lambda}^{1/2}\mathbf{P}' = \sum_{j=1}^p \sqrt{\lambda_j}\,\mathbf{e}_j\mathbf{e}_j'.$$

Square-Root Matrices

The square root of a positive definite matrix $\mathbf{A}$ has the following properties:

1. Symmetry: $(\mathbf{A}^{1/2})' = \mathbf{A}^{1/2}$.
2. $\mathbf{A}^{1/2}\mathbf{A}^{1/2} = \mathbf{A}$.
3. $\mathbf{A}^{-1/2} = \sum_{j=1}^p \lambda_j^{-1/2}\mathbf{e}_j\mathbf{e}_j' = \mathbf{P}\boldsymbol{\Lambda}^{-1/2}\mathbf{P}'$.
4. $\mathbf{A}^{1/2}\mathbf{A}^{-1/2} = \mathbf{A}^{-1/2}\mathbf{A}^{1/2} = \mathbf{I}$.
5. $\mathbf{A}^{-1/2}\mathbf{A}^{-1/2} = \mathbf{A}^{-1}$.

Note that there are other ways of defining the square root of a positive definite matrix: in the Cholesky decomposition $\mathbf{A} = \mathbf{L}\mathbf{L}'$, with $\mathbf{L}$ a lower triangular matrix, $\mathbf{L}$ is also called a square root of $\mathbf{A}$.
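
A NumPy sketch of the symmetric square root via the spectral decomposition, compared with the Cholesky factor:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])  # symmetric positive definite

lam, P = np.linalg.eigh(A)
A_half = P @ np.diag(np.sqrt(lam)) @ P.T  # symmetric square root
print(np.allclose(A_half @ A_half, A))    # True
print(np.allclose(A_half, A_half.T))      # symmetric -> True

L = np.linalg.cholesky(A)                 # lower triangular square root
print(np.allclose(L @ L.T, A))            # also reproduces A -> True
```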

Random Vectors and Matrices

A random matrix (vector) is a matrix (vector) whose elements are random variables. If $\mathbf{X}_{n \times p}$ is a random matrix, the expected value of $\mathbf{X}$ is the $n \times p$ matrix
$$E(\mathbf{X}) = \begin{bmatrix} E(X_{11}) & E(X_{12}) & \cdots & E(X_{1p}) \\ E(X_{21}) & E(X_{22}) & \cdots & E(X_{2p}) \\ \vdots & \vdots & & \vdots \\ E(X_{n1}) & E(X_{n2}) & \cdots & E(X_{np}) \end{bmatrix},$$
where
$$E(X_{ij}) = \int_{-\infty}^{\infty} x_{ij} f_{ij}(x_{ij})\,dx_{ij},$$
with $f_{ij}(x_{ij})$ the density function of the continuous random variable $X_{ij}$. If $X_{ij}$ is a discrete random variable, we compute its expectation as a sum rather than an integral.

Linear Combinations

The usual rules for expectations apply. If $\mathbf{X}$ and $\mathbf{Y}$ are two random matrices and $\mathbf{A}$ and $\mathbf{B}$ are two constant matrices of the appropriate dimensions, then
$$E(\mathbf{X} + \mathbf{Y}) = E(\mathbf{X}) + E(\mathbf{Y}),$$
$$E(\mathbf{A}\mathbf{X}) = \mathbf{A}E(\mathbf{X}),$$
$$E(\mathbf{A}\mathbf{X}\mathbf{B}) = \mathbf{A}E(\mathbf{X})\mathbf{B},$$
$$E(\mathbf{A}\mathbf{X} + \mathbf{B}\mathbf{Y}) = \mathbf{A}E(\mathbf{X}) + \mathbf{B}E(\mathbf{Y}).$$
Further, if $c$ is a scalar-valued constant, then $E(c\mathbf{X}) = cE(\mathbf{X})$.

Mean Vectors and Covariance Matrices

Suppose that $\mathbf{X}$ is a $p \times 1$ (continuous) random vector drawn from some $p$-dimensional distribution. Each element of $\mathbf{X}$, say $X_j$, has its own marginal distribution with marginal mean $\mu_j$ and variance $\sigma_{jj}$ defined in the usual way:
$$\mu_j = \int_{-\infty}^{\infty} x_j f_j(x_j)\,dx_j, \qquad \sigma_{jj} = \int_{-\infty}^{\infty} (x_j - \mu_j)^2 f_j(x_j)\,dx_j.$$

Mean Vectors and Covariance Matrices (cont'd)

To examine the association between a pair of random variables we need to consider their joint distribution. A measure of the linear association between pairs of variables is given by the covariance
$$\sigma_{jk} = E\left[(X_j - \mu_j)(X_k - \mu_k)\right] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} (x_j - \mu_j)(x_k - \mu_k) f_{jk}(x_j, x_k)\,dx_j\,dx_k.$$

Mean Vectors and Covariance Matrices (cont'd)

If the joint density function $f_{jk}(x_j, x_k)$ can be written as the product of the two marginal densities, i.e.,
$$f_{jk}(x_j, x_k) = f_j(x_j)f_k(x_k),$$
then $X_j$ and $X_k$ are independent.

More generally, the $p$-dimensional random vector $\mathbf{X}$ has mutually independent elements if the $p$-dimensional joint density function can be written as the product of the $p$ univariate marginal densities. If two random variables $X_j$ and $X_k$ are independent, then their covariance is equal to 0. [The converse is not always true.]

Mean Vectors and Covariance Matrices (cont'd)

We use $\boldsymbol{\mu}$ to denote the $p \times 1$ vector of marginal population means and $\boldsymbol{\Sigma}$ to denote the $p \times p$ population variance-covariance matrix:
$$\boldsymbol{\Sigma} = E\left[(\mathbf{X} - \boldsymbol{\mu})(\mathbf{X} - \boldsymbol{\mu})'\right].$$
If we carry out the multiplication (an outer product), then $\boldsymbol{\Sigma}$ equals
$$E \begin{bmatrix} (X_1 - \mu_1)^2 & (X_1 - \mu_1)(X_2 - \mu_2) & \cdots & (X_1 - \mu_1)(X_p - \mu_p) \\ (X_2 - \mu_2)(X_1 - \mu_1) & (X_2 - \mu_2)^2 & \cdots & (X_2 - \mu_2)(X_p - \mu_p) \\ \vdots & \vdots & & \vdots \\ (X_p - \mu_p)(X_1 - \mu_1) & (X_p - \mu_p)(X_2 - \mu_2) & \cdots & (X_p - \mu_p)^2 \end{bmatrix}.$$

Mean Vectors and Covariance Matrices (cont'd)

By taking expectations element-wise we find that
$$\boldsymbol{\Sigma} = \begin{bmatrix} \sigma_{11} & \sigma_{12} & \cdots & \sigma_{1p} \\ \sigma_{21} & \sigma_{22} & \cdots & \sigma_{2p} \\ \vdots & \vdots & & \vdots \\ \sigma_{p1} & \sigma_{p2} & \cdots & \sigma_{pp} \end{bmatrix}.$$
Since $\sigma_{jk} = \sigma_{kj}$ for all $j \neq k$, $\boldsymbol{\Sigma}$ is symmetric. $\boldsymbol{\Sigma}$ is also non-negative definite.
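
A simulation sketch: the sample covariance matrix of draws from a distribution with known $\boldsymbol{\Sigma}$ is symmetric and non-negative definite (NumPy, with a made-up $\boldsymbol{\Sigma}$):

```python
import numpy as np

Sigma = np.array([[2.0, 0.8],
                  [0.8, 1.0]])  # made-up population covariance
rng = np.random.default_rng(1)
X = rng.multivariate_normal(mean=[0.0, 0.0], cov=Sigma, size=100_000)

S = np.cov(X, rowvar=False)                # sample covariance, close to Sigma
print(np.round(S, 2))
print(np.allclose(S, S.T))                 # symmetric -> True
print(np.all(np.linalg.eigvalsh(S) >= 0))  # non-negative definite -> True
```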

Correlation Matrix

The population correlation matrix is the $p \times p$ matrix with off-diagonal elements equal to $\rho_{jk}$ and diagonal elements equal to 1:
$$\begin{bmatrix} 1 & \rho_{12} & \cdots & \rho_{1p} \\ \rho_{21} & 1 & \cdots & \rho_{2p} \\ \vdots & \vdots & & \vdots \\ \rho_{p1} & \rho_{p2} & \cdots & 1 \end{bmatrix}.$$
Since $\rho_{jk} = \rho_{kj}$, the correlation matrix is symmetric. The correlation matrix is also non-negative definite.

Correlation Matrix (cont'd)

The $p \times p$ population standard deviation matrix $\mathbf{V}^{1/2}$ is a diagonal matrix with $\sqrt{\sigma_{jj}}$ along the diagonal and zeros in all off-diagonal positions. Then, with $\boldsymbol{\rho}$ denoting the population correlation matrix,
$$\boldsymbol{\Sigma} = \mathbf{V}^{1/2}\boldsymbol{\rho}\,\mathbf{V}^{1/2}$$
and
$$\boldsymbol{\rho} = (\mathbf{V}^{1/2})^{-1}\boldsymbol{\Sigma}(\mathbf{V}^{1/2})^{-1}.$$
Given $\boldsymbol{\Sigma}$, we can easily obtain the correlation matrix.
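
Computing the correlation matrix from $\boldsymbol{\Sigma}$ exactly as above (NumPy sketch, reusing the made-up $\boldsymbol{\Sigma}$):

```python
import numpy as np

Sigma = np.array([[2.0, 0.8],
                  [0.8, 1.0]])

V_half = np.diag(np.sqrt(np.diag(Sigma)))         # standard deviation matrix
V_half_inv = np.linalg.inv(V_half)
rho = V_half_inv @ Sigma @ V_half_inv             # correlation matrix
print(np.round(rho, 4))                           # off-diagonal: 0.8 / (sqrt(2) * 1)
print(np.allclose(V_half @ rho @ V_half, Sigma))  # reconstructs Sigma -> True
```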

Partitioning Random Vectors

If we partition the random $p \times 1$ vector $\mathbf{X}$ into two components $\mathbf{X}_1$, $\mathbf{X}_2$ of dimensions $q \times 1$ and $(p - q) \times 1$ respectively, then the mean vector and the variance-covariance matrix must be partitioned accordingly.

Partitioned mean vector:
$$E(\mathbf{X}) = E\begin{bmatrix} \mathbf{X}_1 \\ \mathbf{X}_2 \end{bmatrix} = \begin{bmatrix} E(\mathbf{X}_1) \\ E(\mathbf{X}_2) \end{bmatrix} = \begin{bmatrix} \boldsymbol{\mu}_1 \\ \boldsymbol{\mu}_2 \end{bmatrix}.$$
Partitioned variance-covariance matrix:
$$\boldsymbol{\Sigma} = \begin{bmatrix} \text{Var}(\mathbf{X}_1) & \text{Cov}(\mathbf{X}_1, \mathbf{X}_2) \\ \text{Cov}(\mathbf{X}_2, \mathbf{X}_1) & \text{Var}(\mathbf{X}_2) \end{bmatrix} = \begin{bmatrix} \boldsymbol{\Sigma}_{11} & \boldsymbol{\Sigma}_{12} \\ \boldsymbol{\Sigma}_{12}' & \boldsymbol{\Sigma}_{22} \end{bmatrix},$$
where $\boldsymbol{\Sigma}_{11}$ is $q \times q$, $\boldsymbol{\Sigma}_{12}$ is $q \times (p - q)$, and $\boldsymbol{\Sigma}_{22}$ is $(p - q) \times (p - q)$.

Partitioning Covariance Matrices (cont'd)

$\boldsymbol{\Sigma}_{11}$ and $\boldsymbol{\Sigma}_{22}$ are the variance-covariance matrices of the sub-vectors $\mathbf{X}_1$ and $\mathbf{X}_2$, respectively. The off-diagonal elements in those two matrices reflect linear associations among elements within each sub-vector. There are no variances in $\boldsymbol{\Sigma}_{12}$, only covariances. These covariances reflect linear associations between elements in the two different sub-vectors.

Linear Combinations of Random Variables

Let $\mathbf{X}$ be a $p \times 1$ vector with mean $\boldsymbol{\mu}$ and variance-covariance matrix $\boldsymbol{\Sigma}$, and let $\mathbf{c}$ be a $p \times 1$ vector of constants. Then the linear combination $\mathbf{c}'\mathbf{X}$ has mean and variance
$$E(\mathbf{c}'\mathbf{X}) = \mathbf{c}'\boldsymbol{\mu} \quad \text{and} \quad \text{Var}(\mathbf{c}'\mathbf{X}) = \mathbf{c}'\boldsymbol{\Sigma}\mathbf{c}.$$
In general, the mean and variance of a $q \times 1$ vector of linear combinations $\mathbf{Z} = \mathbf{C}_{q \times p}\mathbf{X}_{p \times 1}$ are
$$\boldsymbol{\mu}_\mathbf{Z} = \mathbf{C}\boldsymbol{\mu}_\mathbf{X} \quad \text{and} \quad \boldsymbol{\Sigma}_\mathbf{Z} = \mathbf{C}\boldsymbol{\Sigma}_\mathbf{X}\mathbf{C}'.$$
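
A simulation check of these formulas (NumPy sketch with arbitrary $\mathbf{C}$, $\boldsymbol{\mu}$, $\boldsymbol{\Sigma}$):

```python
import numpy as np

mu = np.array([1.0, -2.0, 0.5])
Sigma = np.array([[1.0, 0.3, 0.0],
                  [0.3, 2.0, 0.5],
                  [0.0, 0.5, 1.5]])
C = np.array([[1.0, 1.0,  0.0],
              [0.0, 1.0, -1.0]])  # q x p = 2 x 3

rng = np.random.default_rng(2)
X = rng.multivariate_normal(mu, Sigma, size=200_000)
Z = X @ C.T                       # each row is C x

print(np.round(Z.mean(axis=0), 2), np.round(C @ mu, 2))  # sample mean close to C mu
print(np.round(np.cov(Z, rowvar=False), 2))              # sample covariance of Z ...
print(np.round(C @ Sigma @ C.T, 2))                      # ... close to C Sigma C'
```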

Cauchy-Schwarz Inequality

We will need some of the results below to derive certain maximization results later in the course.

Cauchy-Schwarz inequality: Let $\mathbf{b}$ and $\mathbf{d}$ be any two $p \times 1$ vectors. Then
$$(\mathbf{b}'\mathbf{d})^2 \le (\mathbf{b}'\mathbf{b})(\mathbf{d}'\mathbf{d}),$$
with equality only if $\mathbf{b} = c\mathbf{d}$ for some scalar constant $c$.

Proof: The equality is obvious for $\mathbf{b} = \mathbf{0}$ or $\mathbf{d} = \mathbf{0}$. For other cases, consider $\mathbf{b} - c\mathbf{d}$ for any constant $c \neq 0$. Then if $\mathbf{b} - c\mathbf{d} \neq \mathbf{0}$, we have
$$0 < (\mathbf{b} - c\mathbf{d})'(\mathbf{b} - c\mathbf{d}) = \mathbf{b}'\mathbf{b} - 2c(\mathbf{b}'\mathbf{d}) + c^2\mathbf{d}'\mathbf{d},$$
since $\mathbf{b} - c\mathbf{d}$ must have positive length.

Cauchy-Schwarz Inequality (cont'd)

We can add and subtract $(\mathbf{b}'\mathbf{d})^2/(\mathbf{d}'\mathbf{d})$ to obtain
$$0 < \mathbf{b}'\mathbf{b} - 2c(\mathbf{b}'\mathbf{d}) + c^2\mathbf{d}'\mathbf{d} - \frac{(\mathbf{b}'\mathbf{d})^2}{\mathbf{d}'\mathbf{d}} + \frac{(\mathbf{b}'\mathbf{d})^2}{\mathbf{d}'\mathbf{d}} = \mathbf{b}'\mathbf{b} - \frac{(\mathbf{b}'\mathbf{d})^2}{\mathbf{d}'\mathbf{d}} + (\mathbf{d}'\mathbf{d})\left(c - \frac{\mathbf{b}'\mathbf{d}}{\mathbf{d}'\mathbf{d}}\right)^2.$$
Since $c$ can be anything, we can choose $c = \mathbf{b}'\mathbf{d}/\mathbf{d}'\mathbf{d}$. Then
$$0 < \mathbf{b}'\mathbf{b} - \frac{(\mathbf{b}'\mathbf{d})^2}{\mathbf{d}'\mathbf{d}} \quad \Longrightarrow \quad (\mathbf{b}'\mathbf{d})^2 < (\mathbf{b}'\mathbf{b})(\mathbf{d}'\mathbf{d})$$
for $\mathbf{b} \neq c\mathbf{d}$ (otherwise, we have equality).

Extended Cauchy-Schwarz Inequality

If $\mathbf{b}$ and $\mathbf{d}$ are any two $p \times 1$ vectors and $\mathbf{B}$ is a $p \times p$ positive definite matrix, then
$$(\mathbf{b}'\mathbf{d})^2 \le (\mathbf{b}'\mathbf{B}\mathbf{b})(\mathbf{d}'\mathbf{B}^{-1}\mathbf{d}),$$
with equality if and only if $\mathbf{b} = c\mathbf{B}^{-1}\mathbf{d}$ or $\mathbf{d} = c\mathbf{B}\mathbf{b}$ for some constant $c$.

Proof: Consider $\mathbf{B}^{1/2} = \sum_{i=1}^p \sqrt{\lambda_i}\,\mathbf{e}_i\mathbf{e}_i'$ and $\mathbf{B}^{-1/2} = \sum_{i=1}^p \frac{1}{\sqrt{\lambda_i}}\,\mathbf{e}_i\mathbf{e}_i'$. Then we can write
$$\mathbf{b}'\mathbf{d} = \mathbf{b}'\mathbf{I}\mathbf{d} = \mathbf{b}'\mathbf{B}^{1/2}\mathbf{B}^{-1/2}\mathbf{d} = (\mathbf{B}^{1/2}\mathbf{b})'(\mathbf{B}^{-1/2}\mathbf{d}) = \mathbf{b}^{*\prime}\mathbf{d}^*.$$
To complete the proof, simply apply the Cauchy-Schwarz inequality to the vectors $\mathbf{b}^*$ and $\mathbf{d}^*$.

Optimization

Let $\mathbf{B}$ be positive definite and let $\mathbf{d}$ be any $p \times 1$ vector. Then
$$\max_{\mathbf{x} \neq \mathbf{0}} \frac{(\mathbf{x}'\mathbf{d})^2}{\mathbf{x}'\mathbf{B}\mathbf{x}} = \mathbf{d}'\mathbf{B}^{-1}\mathbf{d}$$
is attained when $\mathbf{x} = c\mathbf{B}^{-1}\mathbf{d}$ for any constant $c \neq 0$.

Proof: By the extended Cauchy-Schwarz inequality, $(\mathbf{x}'\mathbf{d})^2 \le (\mathbf{x}'\mathbf{B}\mathbf{x})(\mathbf{d}'\mathbf{B}^{-1}\mathbf{d})$. Since $\mathbf{x} \neq \mathbf{0}$ and $\mathbf{B}$ is positive definite, $\mathbf{x}'\mathbf{B}\mathbf{x} > 0$ and we can divide both sides by $\mathbf{x}'\mathbf{B}\mathbf{x}$ to get the upper bound
$$\frac{(\mathbf{x}'\mathbf{d})^2}{\mathbf{x}'\mathbf{B}\mathbf{x}} \le \mathbf{d}'\mathbf{B}^{-1}\mathbf{d}.$$
Substituting $\mathbf{x} = c\mathbf{B}^{-1}\mathbf{d}$ shows that the bound is attained.
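
A numerical illustration of this maximization result (NumPy, arbitrary $\mathbf{B}$ and $\mathbf{d}$):

```python
import numpy as np

B = np.array([[2.0, 0.5],
              [0.5, 1.0]])
d = np.array([1.0, -1.0])

bound = d @ np.linalg.inv(B) @ d         # claimed maximum d' B^{-1} d

def ratio(x):
    return (x @ d) ** 2 / (x @ B @ x)

x_star = np.linalg.inv(B) @ d            # maximizer (any c != 0 works)
print(np.isclose(ratio(x_star), bound))  # True

# Random directions never exceed the bound.
rng = np.random.default_rng(3)
print(all(ratio(rng.normal(size=2)) <= bound + 1e-12 for _ in range(1000)))  # True
```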

Maximization of a Quadratic Form on the Unit Sphere

Let $\mathbf{B}$ be positive definite with eigenvalues $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_p \ge 0$ and associated normalized eigenvectors $\mathbf{e}_1, \mathbf{e}_2, \ldots, \mathbf{e}_p$. Then
$$\max_{\mathbf{x} \neq \mathbf{0}} \frac{\mathbf{x}'\mathbf{B}\mathbf{x}}{\mathbf{x}'\mathbf{x}} = \lambda_1, \text{ attained when } \mathbf{x} = \mathbf{e}_1,$$
$$\min_{\mathbf{x} \neq \mathbf{0}} \frac{\mathbf{x}'\mathbf{B}\mathbf{x}}{\mathbf{x}'\mathbf{x}} = \lambda_p, \text{ attained when } \mathbf{x} = \mathbf{e}_p.$$
Furthermore, for $k = 1, 2, \ldots, p - 1$,
$$\max_{\mathbf{x} \perp \mathbf{e}_1, \mathbf{e}_2, \ldots, \mathbf{e}_k} \frac{\mathbf{x}'\mathbf{B}\mathbf{x}}{\mathbf{x}'\mathbf{x}} = \lambda_{k+1},$$
attained when $\mathbf{x} = \mathbf{e}_{k+1}$. See the proof at the end of Chapter 2 in the textbook (pages 80-81).
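
A short NumPy check that the Rayleigh quotient is bounded by the extreme eigenvalues and attained at the corresponding eigenvectors:

```python
import numpy as np

B = np.array([[3.0, 1.0],
              [1.0, 2.0]])
lam, E = np.linalg.eigh(B)  # eigh sorts eigenvalues in ascending order

def rayleigh(x):
    return (x @ B @ x) / (x @ x)

print(np.isclose(rayleigh(E[:, -1]), lam[-1]))  # max lambda_1 at e_1 -> True
print(np.isclose(rayleigh(E[:, 0]), lam[0]))    # min lambda_p at e_p -> True

rng = np.random.default_rng(4)
x = rng.normal(size=2)
print(lam[0] <= rayleigh(x) <= lam[-1])         # always between the extremes -> True
```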
