Basic Concepts in Matrix Algebra


1 Basic Concepts in Matrix Algebra

A column array of p elements is called a vector of dimension p and is written as

    x = [x_1, x_2, ..., x_p]'   (a p x 1 column vector).

The transpose of the column vector x is the row vector x' = [x_1 x_2 ... x_p]. A vector can be represented in p-space as a directed line with components along the p axes.
2 Basic Matrix Concepts (cont'd)

Two vectors can be added if they have the same dimension; addition is carried out element-wise:

    x + y = [x_1 + y_1, x_2 + y_2, ..., x_p + y_p]'.

A vector can be contracted or expanded if multiplied by a constant c. Multiplication is also element-wise:

    cx = [c x_1, c x_2, ..., c x_p]'.
3 Examples

(Numerical illustrations of the scalar multiples 6x and (-4)x and of the sum x + y; the entries were lost in transcription.)
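Since the slide's numerical values did not survive transcription, here is a small NumPy sketch of the same element-wise operations, with made-up vectors:

```python
import numpy as np

# Hypothetical values (the slide's original numbers were lost in transcription).
x = np.array([1.0, 2.0, -1.0])
y = np.array([3.0, 0.0, 2.0])

scaled = 6 * x    # element-wise scalar multiplication: [6, 12, -6]
total = x + y     # element-wise vector addition: [4, 2, 1]
```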
4 Basic Matrix Concepts (cont'd)

Multiplication by c > 0 does not change the direction of x. The direction is reversed if c < 0.
5 Basic Matrix Concepts (cont'd)

The length of a vector x is its Euclidean distance from the origin:

    L_x = sqrt( Σ_{j=1}^p x_j^2 ).

Multiplication of a vector x by a constant c changes its length:

    L_{cx} = sqrt( Σ_{j=1}^p c^2 x_j^2 ) = |c| sqrt( Σ_{j=1}^p x_j^2 ) = |c| L_x.

If c = 1/L_x, then cx is a vector of unit length.
6 Examples

The length of x = [2, 1, -4, -2]' is

    L_x = sqrt( 2^2 + 1^2 + (-4)^2 + (-2)^2 ) = sqrt(25) = 5.

Then z = (1/5) x = [2/5, 1/5, -4/5, -2/5]' is a vector of unit length.
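The unit-length calculation above can be checked directly in NumPy:

```python
import numpy as np

x = np.array([2.0, 1.0, -4.0, -2.0])   # vector from the slide's example

L_x = np.sqrt(np.sum(x**2))  # Euclidean length: sqrt(4 + 1 + 16 + 4) = 5
z = x / L_x                  # rescaling by 1/L_x gives a unit-length vector
```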
7 Angle Between Vectors

Consider two vectors x and y in two dimensions. If θ_1 is the angle between x and the horizontal axis and θ_2 > θ_1 is the angle between y and the horizontal axis, then

    cos(θ_1) = x_1 / L_x,   sin(θ_1) = x_2 / L_x,
    cos(θ_2) = y_1 / L_y,   sin(θ_2) = y_2 / L_y.

If θ is the angle between x and y, then

    cos(θ) = cos(θ_2 - θ_1) = cos(θ_2) cos(θ_1) + sin(θ_2) sin(θ_1),

so that

    cos(θ) = (x_1 y_1 + x_2 y_2) / (L_x L_y).
8 Angle Between Vectors (cont'd)

(Figure: the angle θ between two vectors x and y.)
9 Inner Product

The inner product between two vectors x and y is

    x'y = Σ_{j=1}^p x_j y_j.

Then L_x = sqrt(x'x), L_y = sqrt(y'y), and

    cos(θ) = x'y / ( sqrt(x'x) sqrt(y'y) ).

Since cos(θ) = 0 when x'y = 0, and cos(θ) = 0 for θ = 90° or θ = 270°, the vectors are perpendicular (orthogonal) when x'y = 0.
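As a quick sketch (the vectors are chosen for illustration), the inner product yields the angle between two vectors:

```python
import numpy as np

x = np.array([1.0, 0.0])
y = np.array([1.0, 1.0])

inner = x @ y                                   # x'y = sum_j x_j y_j
cos_theta = inner / (np.sqrt(x @ x) * np.sqrt(y @ y))
theta_deg = np.degrees(np.arccos(cos_theta))    # 45 degrees for these vectors
```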
10 Linear Dependence

Two vectors x and y are linearly dependent if there exist two constants c_1 and c_2, not both zero, such that

    c_1 x + c_2 y = 0.

If two vectors are linearly dependent, then one can be written as a linear combination of the other; from the equation above (with c_1 ≠ 0),

    x = -(c_2 / c_1) y.

More generally, k vectors x_1, x_2, ..., x_k are linearly dependent if there exist constants c_1, c_2, ..., c_k, not all zero, such that

    Σ_{j=1}^k c_j x_j = 0.
11 Vectors of the same dimension that are not linearly dependent are said to be linearly independent.
12 Linear Independence: Example

Let

    x_1 = [1, 2, 1]',   x_2 = [1, 0, -1]',   x_3 = [1, -2, 1]'.

Then c_1 x_1 + c_2 x_2 + c_3 x_3 = 0 requires

    c_1 + c_2 + c_3 = 0
    2c_1 - 2c_3 = 0
    c_1 - c_2 + c_3 = 0.

The unique solution is c_1 = c_2 = c_3 = 0, so the vectors are linearly independent.
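A NumPy check of this example (the entries of x_3 are partially garbled in the transcription, so the third column below is an assumed reconstruction): full column rank confirms linear independence, and the only solution of the homogeneous system is the zero vector.

```python
import numpy as np

# Columns are x1, x2, x3 from the example above.
X = np.array([[1.0,  1.0,  1.0],
              [2.0,  0.0, -2.0],
              [1.0, -1.0,  1.0]])

rank = np.linalg.matrix_rank(X)       # rank 3 => columns linearly independent
c = np.linalg.solve(X, np.zeros(3))   # unique solution of Xc = 0 is c = 0
```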
13 Projections

The projection of x on y is defined by

    Projection of x on y = (x'y / y'y) y = (x'y / L_y) (1/L_y) y.

The length of the projection is

    Length of projection = |x'y| / L_y = L_x |x'y| / (L_x L_y) = L_x |cos(θ)|,

where θ is the angle between x and y.
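A minimal sketch of the projection formula, with made-up vectors:

```python
import numpy as np

x = np.array([1.0, 2.0])
y = np.array([3.0, 0.0])

proj = (x @ y) / (y @ y) * y              # projection of x on y: [1, 0]
length = abs(x @ y) / np.linalg.norm(y)   # length of the projection: 1
```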
14 Matrices

A matrix A is an array of elements a_ij with n rows and p columns:

    A = [ a_11 a_12 ... a_1p
          a_21 a_22 ... a_2p
          ...
          a_n1 a_n2 ... a_np ].

The transpose A' has p rows and n columns; the jth row of A' is the jth column of A:

    A' = [ a_11 a_21 ... a_n1
           a_12 a_22 ... a_n2
           ...
           a_1p a_2p ... a_np ].
15 Matrix Algebra

Multiplication of A by a constant c is carried out element by element:

    cA = [ ca_11 ca_12 ... ca_1p
           ca_21 ca_22 ... ca_2p
           ...
           ca_n1 ca_n2 ... ca_np ].
16 Matrix Addition

Two matrices A (n x p) = {a_ij} and B (n x p) = {b_ij} of the same dimensions can be added element by element. The resulting matrix is C (n x p) = {c_ij} = {a_ij + b_ij}:

    C = A + B = [ a_11 + b_11  a_12 + b_12  ...  a_1p + b_1p
                  a_21 + b_21  a_22 + b_22  ...  a_2p + b_2p
                  ...
                  a_n1 + b_n1  a_n2 + b_n2  ...  a_np + b_np ].
17 Examples

(Numerical illustrations of matrix scaling and matrix addition; the entries were lost in transcription.)
18 Matrix Multiplication

Multiplication of two matrices A (n x p) and B (m x q) can be carried out only if the matrices are compatible for multiplication:

    A (n x p) times B (m x q): compatible if p = m.
    B (m x q) times A (n x p): compatible if q = n.

The element in the ith row and the jth column of AB is the inner product of the ith row of A with the jth column of B.
19 Multiplication Examples

(Numerical illustrations of matrix products; the entries were lost in transcription.)
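Because the slide's numbers were lost, here is a NumPy sketch with made-up matrices showing the compatibility rule and the inner-product definition of each entry:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])          # 2 x 2
B = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0]])     # 2 x 3

C = A @ B   # compatible: columns of A (2) match rows of B (2); C is 2 x 3

# Entry (i, j) of C is the inner product of row i of A with column j of B:
c01 = A[0, :] @ B[:, 1]
```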
20 Identity Matrix

An identity matrix, denoted by I, is a square matrix with 1's along the main diagonal and 0's everywhere else. For example,

    I (2 x 2) = [ 1 0        I (3 x 3) = [ 1 0 0
                  0 1 ],                   0 1 0
                                           0 0 1 ].

If A is a square matrix, then AI = IA = A. In general, I (n x n) A (n x p) = A (n x p), but A (n x p) I (n x n) is not defined for p ≠ n.
21 Symmetric Matrices

A square matrix A is symmetric if A = A'. Equivalently, if A has elements {a_ij}, then A is symmetric if a_ij = a_ji for all i and j.
22 Inverse Matrix

Consider two square matrices A (k x k) and B (k x k). If

    AB = BA = I,

then B is the inverse of A, denoted A^{-1}. The inverse of A exists only if the columns of A are linearly independent. If A = diag{a_jj} is a diagonal matrix (with nonzero diagonal entries), then A^{-1} = diag{1/a_jj}.
23 Inverse Matrix (cont'd)

For a 2 x 2 matrix A, the inverse is

    A^{-1} = [ a_11  a_12 ]^{-1}  =  (1/det(A)) [  a_22  -a_12
               a_21  a_22                          -a_21   a_11 ],

where det(A) = a_11 a_22 - a_12 a_21 denotes the determinant of A.
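The closed-form 2 x 2 inverse can be sketched and compared against NumPy's general-purpose inverse (the matrix is made up):

```python
import numpy as np

A = np.array([[3.0, 2.0],
              [1.0, 4.0]])

det = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]          # 3*4 - 2*1 = 10
A_inv = (1 / det) * np.array([[ A[1, 1], -A[0, 1]],
                              [-A[1, 0],  A[0, 0]]])  # closed-form 2x2 inverse

# A_inv agrees with np.linalg.inv(A), and A @ A_inv is the identity.
```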
24 Orthogonal Matrices

A square matrix Q is orthogonal if

    QQ' = Q'Q = I,

or, equivalently, Q' = Q^{-1}. If Q is orthogonal, its rows and columns have unit length (q_j'q_j = 1) and are mutually perpendicular (q_j'q_k = 0 for any j ≠ k).
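A standard concrete instance (not from the slides) is a 2 x 2 rotation matrix, which is orthogonal for any angle:

```python
import numpy as np

theta = np.pi / 6   # 30 degrees, chosen arbitrarily
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Q Q' = Q'Q = I, so the transpose is the inverse.
```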
25 Eigenvalues and Eigenvectors

A square matrix A has an eigenvalue λ with corresponding eigenvector z ≠ 0 if

    Az = λz.

The eigenvalues of A are the solutions of |A - λI| = 0. A normalized eigenvector (of unit length) is denoted by e. A k x k matrix A has k pairs of eigenvalues and eigenvectors

    λ_1, e_1;   λ_2, e_2;   ...;   λ_k, e_k,

where e_i'e_i = 1, e_i'e_j = 0 (i ≠ j), and the eigenvectors are unique up to a change in sign unless two or more eigenvalues are equal.
26 Spectral Decomposition

Eigenvalues and eigenvectors will play an important role in this course. For example, principal components are based on the eigenvalues and eigenvectors of sample covariance matrices.

The spectral decomposition of a k x k symmetric matrix A is

    A = λ_1 e_1 e_1' + λ_2 e_2 e_2' + ... + λ_k e_k e_k'
      = [e_1 e_2 ... e_k] diag{λ_1, λ_2, ..., λ_k} [e_1 e_2 ... e_k]'
      = P Λ P'.
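A short sketch of the spectral decomposition (with a made-up symmetric matrix): `np.linalg.eigh` returns the eigenvalues and orthonormal eigenvectors, and summing λ_j e_j e_j' reconstructs A.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])   # symmetric, so A = P Lambda P'

lam, P = np.linalg.eigh(A)   # eigenvalues ascending; eigenvectors in columns
A_rebuilt = sum(lam[j] * np.outer(P[:, j], P[:, j]) for j in range(2))
```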
27 Determinant and Trace

The trace of a k x k matrix A is the sum of its diagonal elements:

    trace(A) = Σ_{i=1}^k a_ii.

For a square, symmetric matrix A, the trace equals the sum of the eigenvalues:

    trace(A) = Σ_{i=1}^k a_ii = Σ_{i=1}^k λ_i,

and the determinant equals the product of the eigenvalues:

    |A| = Π_{i=1}^k λ_i.
28 Rank of a Matrix

The rank of a square matrix A is

  - the number of linearly independent rows,
  - the number of linearly independent columns,
  - the number of nonzero eigenvalues.

The inverse of a k x k matrix A exists if and only if rank(A) = k, i.e., there are no zero eigenvalues.
29 Positive Definite Matrix

For a k x k symmetric matrix A and a vector x = [x_1, x_2, ..., x_k]', the quantity x'Ax is called a quadratic form. Note that

    x'Ax = Σ_{i=1}^k Σ_{j=1}^k a_ij x_i x_j.

If x'Ax ≥ 0 for any vector x, both A and the quadratic form are said to be nonnegative definite. If x'Ax > 0 for any vector x ≠ 0, both A and the quadratic form are said to be positive definite.
30 Example 2.11

Show that the matrix of the quadratic form 3x_1^2 + 2x_2^2 - 2√2 x_1 x_2 is positive definite.

For

    A = [   3   -√2
          -√2     2 ],

the eigenvalues are λ_1 = 4 and λ_2 = 1. Then A = 4 e_1 e_1' + e_2 e_2'. Write

    x'Ax = 4 x'e_1 e_1'x + x'e_2 e_2'x = 4 y_1^2 + y_2^2 ≥ 0,

which is zero only for y_1 = y_2 = 0.
31 Example 2.11 (cont'd)

When x ≠ 0, y_1 and y_2 cannot both be zero, because

    y = [ y_1 ] = [ e_1' ] [ x_1 ] = P'x,
        [ y_2 ]   [ e_2' ] [ x_2 ]

with P orthogonal, so that (P')^{-1} = P. Then x = Py, and since x ≠ 0 it follows that y ≠ 0.

Using the spectral decomposition, we can show that:

  - A is positive definite if all of its eigenvalues are positive.
  - A is nonnegative definite if all of its eigenvalues are ≥ 0.
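The eigenvalue criterion can be checked numerically. The matrix below is the one from Example 2.11 for the quadratic form 3x_1^2 + 2x_2^2 - 2√2 x_1 x_2 (the cross term is partially garbled in the transcription, so this reconstruction is an assumption):

```python
import numpy as np

A = np.array([[ 3.0,           -np.sqrt(2.0)],
              [-np.sqrt(2.0),   2.0        ]])

lam = np.linalg.eigvalsh(A)              # eigenvalues: 1 and 4, both positive
positive_definite = bool(np.all(lam > 0))
```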
32 Distance and Quadratic Forms

For x = [x_1, x_2, ..., x_p]' and a p x p positive definite matrix A,

    d^2 = x'Ax > 0   when x ≠ 0.

Thus, a positive definite quadratic form can be interpreted as a squared distance of x from the origin, and vice versa. The squared distance from x to a fixed point µ is given by the quadratic form

    (x - µ)'A(x - µ).
33 Distance and Quadratic Forms (cont'd)

We can interpret distance in terms of the eigenvalues and eigenvectors of A as well. Any point x at constant distance c from the origin satisfies

    x'Ax = x' ( Σ_{j=1}^p λ_j e_j e_j' ) x = Σ_{j=1}^p λ_j (x'e_j)^2 = c^2,

the expression for an ellipsoid in p dimensions.

Note that the point x = c λ_1^{-1/2} e_1 is at distance c (in the direction of e_1) from the origin, because it satisfies x'Ax = c^2. The same is true of the points x = c λ_j^{-1/2} e_j, j = 1, ..., p. Thus, all points at distance c lie on an ellipsoid with axes in the directions of the eigenvectors and with lengths proportional to λ_j^{-1/2}.
34 Distance and Quadratic Forms (cont'd)

(Figure: the constant-distance ellipsoid with axes along the eigenvectors of A.)
35 Square-Root Matrices

The spectral decomposition of a positive definite matrix A yields

    A = Σ_{j=1}^p λ_j e_j e_j' = P Λ P',

with Λ = diag{λ_j}, all λ_j > 0, and P = [e_1 e_2 ... e_p] an orthogonal matrix of eigenvectors. Then

    A^{-1} = P Λ^{-1} P' = Σ_{j=1}^p (1/λ_j) e_j e_j'.

With Λ^{1/2} = diag{√λ_j}, a square-root matrix is

    A^{1/2} = P Λ^{1/2} P' = Σ_{j=1}^p √λ_j e_j e_j'.
36 Square-Root Matrices (cont'd)

The square root of a positive definite matrix A has the following properties:

  1. Symmetry: (A^{1/2})' = A^{1/2}.
  2. A^{1/2} A^{1/2} = A.
  3. A^{-1/2} = Σ_{j=1}^p λ_j^{-1/2} e_j e_j' = P Λ^{-1/2} P'.
  4. A^{1/2} A^{-1/2} = A^{-1/2} A^{1/2} = I.
  5. A^{-1/2} A^{-1/2} = A^{-1}.

Note that there are other ways of defining the square root of a positive definite matrix: in the Cholesky decomposition A = LL', with L a lower triangular matrix, L is also called a square root of A.
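The symmetric square root A^{1/2} = P Λ^{1/2} P' can be sketched directly from the eigendecomposition (made-up positive definite matrix):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])   # positive definite

lam, P = np.linalg.eigh(A)
A_half = P @ np.diag(np.sqrt(lam)) @ P.T   # A^{1/2} = P Lambda^{1/2} P'

# A_half is symmetric and squares back to A.
```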
37 Random Vectors and Matrices

A random matrix (vector) is a matrix (vector) whose elements are random variables. If X (n x p) is a random matrix, the expected value of X is the n x p matrix

    E(X) = [ E(X_11) E(X_12) ... E(X_1p)
             E(X_21) E(X_22) ... E(X_2p)
             ...
             E(X_n1) E(X_n2) ... E(X_np) ],

where

    E(X_ij) = ∫ x_ij f_ij(x_ij) dx_ij,

with f_ij(x_ij) the density function of the continuous random variable X_ij. If X_ij is a discrete random variable, we compute its expectation as a sum rather than an integral.
38 Linear Combinations

The usual rules for expectations apply. If X and Y are two random matrices and A and B are two constant matrices of the appropriate dimensions, then

    E(X + Y) = E(X) + E(Y)
    E(AX) = A E(X)
    E(AXB) = A E(X) B
    E(AX + BY) = A E(X) + B E(Y).

Further, if c is a scalar-valued constant, then E(cX) = c E(X).
39 Mean Vectors and Covariance Matrices

Suppose that X is a p x 1 (continuous) random vector drawn from some p-dimensional distribution. Each element of X, say X_j, has its own marginal distribution with marginal mean µ_j and variance σ_jj defined in the usual way:

    µ_j = ∫ x_j f_j(x_j) dx_j,
    σ_jj = ∫ (x_j - µ_j)^2 f_j(x_j) dx_j.
40 Mean Vectors and Covariance Matrices (cont'd)

To examine the association between a pair of random variables we need to consider their joint distribution. A measure of the linear association between pairs of variables is given by the covariance

    σ_jk = E[(X_j - µ_j)(X_k - µ_k)]
         = ∫∫ (x_j - µ_j)(x_k - µ_k) f_jk(x_j, x_k) dx_j dx_k.
41 Mean Vectors and Covariance Matrices (cont'd)

If the joint density function can be written as the product of the two marginal densities, i.e.,

    f_jk(x_j, x_k) = f_j(x_j) f_k(x_k),

then X_j and X_k are independent. More generally, the p-dimensional random vector X has mutually independent elements if the p-dimensional joint density function can be written as the product of the p univariate marginal densities. If two random variables X_j and X_k are independent, then their covariance is equal to 0. [The converse is not always true.]
42 Mean Vectors and Covariance Matrices (cont'd)

We use µ to denote the p x 1 vector of marginal population means and Σ to denote the p x p population variance-covariance matrix:

    Σ = E[(X - µ)(X - µ)'].

Carrying out the multiplication (an outer product), Σ is equal to

    E [ (X_1 - µ_1)^2           (X_1 - µ_1)(X_2 - µ_2)  ...  (X_1 - µ_1)(X_p - µ_p)
        (X_2 - µ_2)(X_1 - µ_1)  (X_2 - µ_2)^2           ...  (X_2 - µ_2)(X_p - µ_p)
        ...
        (X_p - µ_p)(X_1 - µ_1)  (X_p - µ_p)(X_2 - µ_2)  ...  (X_p - µ_p)^2 ].
43 Mean Vectors and Covariance Matrices (cont'd)

By taking expectations element-wise we find that

    Σ = [ σ_11 σ_12 ... σ_1p
          σ_21 σ_22 ... σ_2p
          ...
          σ_p1 σ_p2 ... σ_pp ].

Since σ_jk = σ_kj for all j ≠ k, Σ is symmetric. Σ is also nonnegative definite.
44 Correlation Matrix

The population correlation matrix is the p x p matrix with off-diagonal elements ρ_jk and diagonal elements equal to 1:

    [ 1    ρ_12 ... ρ_1p
      ρ_21 1    ... ρ_2p
      ...
      ρ_p1 ρ_p2 ... 1    ].

Since ρ_jk = ρ_kj, the correlation matrix is symmetric; it is also nonnegative definite.
45 Correlation Matrix (cont'd)

The p x p population standard deviation matrix V^{1/2} is a diagonal matrix with √σ_jj along the diagonal and zeros in all off-diagonal positions. Then

    Σ = V^{1/2} ρ V^{1/2},

where ρ is the population correlation matrix, so that

    ρ = (V^{1/2})^{-1} Σ (V^{1/2})^{-1}.

Given Σ, we can easily obtain the correlation matrix.
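A sketch of this conversion, using a made-up covariance matrix: scale rows and columns by the reciprocal standard deviations to get the correlation matrix, and scale back to recover Σ.

```python
import numpy as np

Sigma = np.array([[ 4.0,  1.0,  2.0],
                  [ 1.0,  9.0, -3.0],
                  [ 2.0, -3.0, 25.0]])   # hypothetical covariance matrix

V_half = np.diag(np.sqrt(np.diag(Sigma)))   # std-deviation matrix V^{1/2}
V_half_inv = np.linalg.inv(V_half)
rho = V_half_inv @ Sigma @ V_half_inv       # correlation matrix

# Diagonal of rho is 1; e.g. rho[0, 1] = 1 / (2 * 3) = 1/6.
```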
46 Partitioning Random Vectors

If we partition the random p x 1 vector X into two components X_1, X_2 of dimensions q x 1 and (p - q) x 1 respectively, then the mean vector and the variance-covariance matrix are partitioned accordingly.

Partitioned mean vector:

    E(X) = E [ X_1 ] = [ E(X_1) ] = [ µ_1 ]
              [ X_2 ]   [ E(X_2) ]   [ µ_2 ].

Partitioned variance-covariance matrix:

    Σ = [ Var(X_1)       Cov(X_1, X_2) ] = [ Σ_11   Σ_12
          Cov(X_2, X_1)  Var(X_2)      ]     Σ_12'  Σ_22 ],

where Σ_11 is q x q, Σ_12 is q x (p - q), and Σ_22 is (p - q) x (p - q).
47 Partitioning Covariance Matrices (cont'd)

Σ_11 and Σ_22 are the variance-covariance matrices of the subvectors X_1 and X_2, respectively. The off-diagonal elements in those two matrices reflect linear associations among elements within each subvector. There are no variances in Σ_12, only covariances; these covariances reflect linear associations between elements in the two different subvectors.
48 Linear Combinations of Random Variables

Let X be a p x 1 random vector with mean µ and variance-covariance matrix Σ, and let c be a p x 1 vector of constants. Then the linear combination c'X has mean and variance

    E(c'X) = c'µ   and   Var(c'X) = c'Σc.

In general, the mean and covariance matrix of a q x 1 vector of linear combinations Z = CX (with C a q x p constant matrix) are

    µ_Z = C µ_X   and   Σ_Z = C Σ_X C'.
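A minimal sketch of these formulas with made-up values of µ, Σ, and C:

```python
import numpy as np

mu = np.array([1.0, 2.0])               # mean of X
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])          # covariance of X
C = np.array([[1.0,  1.0],
              [1.0, -1.0]])             # two linear combinations, Z = C X

mu_Z = C @ mu                # E(CX) = C mu        -> [3, -1]
Sigma_Z = C @ Sigma @ C.T    # Var(CX) = C Sigma C'
```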
49 Cauchy-Schwarz Inequality

We will need some of the results below to derive maximization results later in the course.

Cauchy-Schwarz inequality: Let b and d be any two p x 1 vectors. Then

    (b'd)^2 ≤ (b'b)(d'd),

with equality only if b = cd for some scalar constant c.

Proof: The equality is obvious for b = 0 or d = 0. For other cases, consider b - cd for a constant c ≠ 0. If b - cd ≠ 0, it must have positive length, so

    0 < (b - cd)'(b - cd) = b'b - 2c(b'd) + c^2 d'd.
50 Cauchy-Schwarz Inequality (cont'd)

We can add and subtract (b'd)^2/(d'd) to obtain

    0 < b'b - 2c(b'd) + c^2 d'd - (b'd)^2/(d'd) + (b'd)^2/(d'd)
      = b'b - (b'd)^2/(d'd) + d'd ( c - (b'd)/(d'd) )^2.

Since c can be anything, we can choose c = (b'd)/(d'd), which eliminates the last term. Then

    0 < b'b - (b'd)^2/(d'd),   i.e.,   (b'd)^2 < (b'b)(d'd)

for b ≠ cd (otherwise, we have equality).
51 Extended Cauchy-Schwarz Inequality

If b and d are any two p x 1 vectors and B is a p x p positive definite matrix, then

    (b'd)^2 ≤ (b'Bb)(d'B^{-1}d),

with equality if and only if b = cB^{-1}d (equivalently, d = cBb) for some constant c.

Proof: Consider B^{1/2} = Σ_{i=1}^p √λ_i e_i e_i' and B^{-1/2} = Σ_{i=1}^p (1/√λ_i) e_i e_i'. Then we can write

    b'd = b'Id = b'B^{1/2}B^{-1/2}d = (B^{1/2}b)'(B^{-1/2}d).

To complete the proof, simply apply the Cauchy-Schwarz inequality to the vectors B^{1/2}b and B^{-1/2}d.
52 Optimization

Let B be positive definite and let d be any p x 1 vector. Then

    max_{x ≠ 0} (x'd)^2 / (x'Bx) = d'B^{-1}d,

attained when x = cB^{-1}d for any constant c ≠ 0.

Proof: By the extended Cauchy-Schwarz inequality,

    (x'd)^2 ≤ (x'Bx)(d'B^{-1}d).

Since x ≠ 0 and B is positive definite, x'Bx > 0, and we can divide both sides by x'Bx to get the upper bound

    (x'd)^2 / (x'Bx) ≤ d'B^{-1}d.

Substituting x = cB^{-1}d shows that this upper bound is attained.
53 Maximization of a Quadratic Form on a Unit Sphere

Let B be positive definite with eigenvalues λ_1 ≥ λ_2 ≥ ... ≥ λ_p ≥ 0 and associated (normalized) eigenvectors e_1, e_2, ..., e_p. Then

    max_{x ≠ 0} x'Bx / x'x = λ_1,   attained when x = e_1,
    min_{x ≠ 0} x'Bx / x'x = λ_p,   attained when x = e_p.

Furthermore, for k = 1, 2, ..., p - 1,

    max_{x ⊥ e_1, ..., e_k} x'Bx / x'x = λ_{k+1},

attained when x = e_{k+1}. See the proof at the end of Chapter 2 in the textbook (pages 80-81).
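These bounds can be illustrated numerically (matrix and random trials are made up): every Rayleigh quotient x'Bx / x'x falls between the smallest and largest eigenvalues, and the top eigenvector attains the maximum.

```python
import numpy as np

rng = np.random.default_rng(0)
B = np.array([[4.0, 1.0],
              [1.0, 3.0]])                 # positive definite

lam, e = np.linalg.eigh(B)                 # eigenvalues in ascending order
ratios = []
for _ in range(1000):
    x = rng.standard_normal(2)
    ratios.append((x @ B @ x) / (x @ x))   # Rayleigh quotient for a random x

at_max = e[:, -1] @ B @ e[:, -1]           # quotient at the top eigenvector
```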
More informationLinear Algebra: Graduate Level Problems and Solutions. Igor Yanovsky
Linear Algebra: Graduate Level Problems and Solutions Igor Yanovsky Linear Algebra Igor Yanovsky, 5 Disclaimer: This handbook is intended to assist graduate students with qualifying examination preparation.
More informationMTH50 Spring 07 HW Assignment 7 {From [FIS0]}: Sec 44 #4a h 6; Sec 5 #ad ac 4ae 4 7 The due date for this assignment is 04/05/7 Sec 44 #4a h Evaluate the erminant of the following matrices by any legitimate
More informationFIXED POINT ITERATIONS
FIXED POINT ITERATIONS MARKUS GRASMAIR 1. Fixed Point Iteration for Nonlinear Equations Our goal is the solution of an equation (1) F (x) = 0, where F : R n R n is a continuous vector valued mapping in
More informationLinear Algebra Problems
Math 504 505 Linear Algebra Problems Jerry L. Kazdan Topics 1 Basics 2 Linear Equations 3 Linear Maps 4 Rank One Matrices 5 Algebra of Matrices 6 Eigenvalues and Eigenvectors 7 Inner Products and Quadratic
More informationInner Product Spaces 5.2 Inner product spaces
Inner Product Spaces 5.2 Inner product spaces November 15 Goals Concept of length, distance, and angle in R 2 or R n is extended to abstract vector spaces V. Sucn a vector space will be called an Inner
More informationComputational Aspects of Aggregation in Biological Systems
Computational Aspects of Aggregation in Biological Systems Vladik Kreinovich and Max Shpak University of Texas at El Paso, El Paso, TX 79968, USA vladik@utep.edu, mshpak@utep.edu Summary. Many biologically
More informationMath 240 Calculus III
The Calculus III Summer 2015, Session II Wednesday, July 8, 2015 Agenda 1. of the determinant 2. determinants 3. of determinants What is the determinant? Yesterday: Ax = b has a unique solution when A
More informationReal symmetric matrices/1. 1 Eigenvalues and eigenvectors
Real symmetric matrices 1 Eigenvalues and eigenvectors We use the convention that vectors are row vectors and matrices act on the right. Let A be a square matrix with entries in a field F; suppose that
More informationIntroduction  Motivation. Many phenomena (physical, chemical, biological, etc.) are model by differential equations. f f(x + h) f(x) (x) = lim
Introduction  Motivation Many phenomena (physical, chemical, biological, etc.) are model by differential equations. Recall the definition of the derivative of f(x) f f(x + h) f(x) (x) = lim. h 0 h Its
More informationClassical differential geometry of twodimensional surfaces
Classical differential geometry of twodimensional surfaces 1 Basic definitions This section gives an overview of the basic notions of differential geometry for twodimensional surfaces. It follows mainly
More informationMath 20F Practice Final Solutions. Jorel Briones
Math 2F Practice Final Solutions Jorel Briones Jorel Briones / Math 2F Practice Problems for Final Page 2 of 6 NOTE: For the solutions to these problems, I skip all the row reduction calculations. Please
More informationGATE Engineering Mathematics SAMPLE STUDY MATERIAL. Postal Correspondence Course GATE. Engineering. Mathematics GATE ENGINEERING MATHEMATICS
SAMPLE STUDY MATERIAL Postal Correspondence Course GATE Engineering Mathematics GATE ENGINEERING MATHEMATICS ENGINEERING MATHEMATICS GATE Syllabus CIVIL ENGINEERING CE CHEMICAL ENGINEERING CH MECHANICAL
More informationKernel Methods. Machine Learning A W VO
Kernel Methods Machine Learning A 708.063 07W VO Outline 1. Dual representation 2. The kernel concept 3. Properties of kernels 4. Examples of kernel machines Kernel PCA Support vector regression (Relevance
More informationLinear Algebra Final Exam Study Guide Solutions Fall 2012
. Let A = Given that v = 7 7 67 5 75 78 Linear Algebra Final Exam Study Guide Solutions Fall 5 explain why it is not possible to diagonalize A. is an eigenvector for A and λ = is an eigenvalue for A diagonalize
More informationFullState Feedback Design for a MultiInput System
FullState Feedback Design for a MultiInput System A. Introduction The openloop system is described by the following state space model. x(t) = Ax(t)+Bu(t), y(t) =Cx(t)+Du(t) () 4 8.5 A =, B =.5.5, C
More informationInverse of a Square Matrix. For an N N square matrix A, the inverse of A, 1
Inverse of a Square Matrix For an N N square matrix A, the inverse of A, 1 A, exists if and only if A is of full rank, i.e., if and only if no column of A is a linear combination 1 of the others. A is
More informationPoisson Solvers. William McLean. April 21, Return to Math3301/Math5315 Common Material.
Poisson Solvers William McLean April 21, 2004 Return to Math3301/Math5315 Common Material 1 Introduction Many problems in applied mathematics lead to a partial differential equation of the form a 2 u +
More informationClifford Algebras and Spin Groups
Clifford Algebras and Spin Groups Math G4344, Spring 2012 We ll now turn from the general theory to examine a specific class class of groups: the orthogonal groups. Recall that O(n, R) is the group of
More informationLinear Algebra. Carleton DeTar February 27, 2017
Linear Algebra Carleton DeTar detar@physics.utah.edu February 27, 2017 This document provides some background for various course topics in linear algebra: solving linear systems, determinants, and finding
More information1. Let r, s, t, v be the homogeneous relations defined on the set M = {2, 3, 4, 5, 6} by
Seminar 1 1. Which ones of the usual symbols of addition, subtraction, multiplication and division define an operation (composition law) on the numerical sets N, Z, Q, R, C? 2. Let A = {a 1, a 2, a 3 }.
More informationMAT 242 CHAPTER 4: SUBSPACES OF R n
MAT 242 CHAPTER 4: SUBSPACES OF R n JOHN QUIGG 1. Subspaces Recall that R n is the set of n 1 matrices, also called vectors, and satisfies the following properties: x + y = y + x x + (y + z) = (x + y)
More informationConjugate Gradient Method
Conjugate Gradient Method direct and indirect methods positive definite linear systems Krylov sequence spectral analysis of Krylov sequence preconditioning Prof. S. Boyd, EE364b, Stanford University Three
More informationUp to this point, our main theoretical tools for finding eigenvalues without using det{a λi} = 0 have been the trace and determinant formulas
Finding Eigenvalues Up to this point, our main theoretical tools for finding eigenvalues without using det{a λi} = 0 have been the trace and determinant formulas plus the facts that det{a} = λ λ λ n, Tr{A}
More informationEigenvalues and Eigenvectors
Chapter 6 Eigenvalues and Eigenvectors 6. Introduction to Eigenvalues Eigenvalues are the key to a system of n differential equations : dy=dt ay becomes dy=dt Ay. Now A is a matrix and y is a vector.y.t/;
More information2 so Q[ 2] is closed under both additive and multiplicative inverses. a 2 2b 2 + b
. FINITEDIMENSIONAL VECTOR SPACES.. Fields By now you ll have acquired a fair knowledge of matrices. These are a concrete embodiment of something rather more abstract. Sometimes it is easier to use matrices,
More information18.06 Problem Set 8  Solutions Due Wednesday, 14 November 2007 at 4 pm in
806 Problem Set 8  Solutions Due Wednesday, 4 November 2007 at 4 pm in 206 08 03 Problem : 205+5+5+5 Consider the matrix A 02 07 a Check that A is a positive Markov matrix, and find its steady state
More informationAn Introduction To Linear Algebra. Kuttler
An Introduction To Linear Algebra Kuttler April, 7 Contents Introduction 7 F n 9 Outcomes 9 Algebra in F n Systems Of Equations Outcomes Systems Of Equations, Geometric Interpretations Systems Of Equations,
More informationTangent spaces, normals and extrema
Chapter 3 Tangent spaces, normals and extrema If S is a surface in 3space, with a point a S where S looks smooth, i.e., without any fold or cusp or selfcrossing, we can intuitively define the tangent
More informationLecture 23: Trace and determinants! (1) (Final lecture)
Lecture 23: Trace and determinants! (1) (Final lecture) Travis Schedler Thurs, Dec 9, 2010 (version: Monday, Dec 13, 3:52 PM) Goals (2) Recall χ T (x) = (x λ 1 ) (x λ n ) = x n tr(t )x n 1 + +( 1) n det(t
More informationFinite Math  Jterm Section Systems of Linear Equations in Two Variables Example 1. Solve the system
Finite Math  Jterm 07 Lecture Notes  //07 Homework Section 4.  9, 0, 5, 6, 9, 0,, 4, 6, 0, 50, 5, 54, 55, 56, 6, 65 Section 4.  Systems of Linear Equations in Two Variables Example. Solve the system
More informationLinear Algebra 2 Final Exam, December 7, 2015 SOLUTIONS. a + 2b = x a + 3b = y. This solves to a = 3x 2y, b = y x. Thus
Linear Algebra 2 Final Exam, December 7, 2015 SOLUTIONS 1. (5.5 points) Let T : R 2 R 4 be a linear mapping satisfying T (1, 1) = ( 1, 0, 2, 3), T (2, 3) = (2, 3, 0, 0). Determine T (x, y) for (x, y) R
More informationw T 1 w T 2. w T n 0 if i j 1 if i = j
Lyapunov Operator Let A F n n be given, and define a linear operator L A : C n n C n n as L A (X) := A X + XA Suppose A is diagonalizable (what follows can be generalized even if this is not possible 
More information1 Determinants. 1.1 Determinant
1 Determinants [SB], Chapter 9, p.188196. [SB], Chapter 26, p.719739. Bellow w ll study the central question: which additional conditions must satisfy a quadratic matrix A to be invertible, that is to
More informationVector Spaces and Subspaces
Vector Spaces and Subspaces Vector Space V Subspaces S of Vector Space V The Subspace Criterion Subspaces are Working Sets The Kernel Theorem Not a Subspace Theorem Independence and Dependence in Abstract
More information1 Invariant subspaces
MATH 2040 Linear Algebra II Lecture Notes by Martin Li Lecture 8 Eigenvalues, eigenvectors and invariant subspaces 1 In previous lectures we have studied linear maps T : V W from a vector space V to another
More informationTOPIC III LINEAR ALGEBRA
[1] Linear Equations TOPIC III LINEAR ALGEBRA (1) Case of Two Endogenous Variables 1) Linear vs. Nonlinear Equations Linear equation: ax + by = c, where a, b and c are constants. 2 Nonlinear equation:
More informationMultivariate Distributions (Hogg Chapter Two)
Multivariate Distributions (Hogg Chapter Two) STAT 451: Mathematical Statistics I Fall Semester 15 Contents 1 Multivariate Distributions 1 11 Random Vectors 111 Two Discrete Random Variables 11 Two Continuous
More information