Linear Algebra & Analysis Review UW EE/AA/ME 578 Convex Optimization


January 9

1 Notation

1. Book pages without explicit citation refer to [1].
2. $\mathbb{R}^n$ denotes the set of real $n$-vectors (column vectors).
3. $\mathbb{R}^{m \times n}$ denotes the set of real $m \times n$ matrices.
4. $\mathbb{S}^n$ denotes the set of $n \times n$ symmetric matrices, $\mathbb{S}^n = \{X \in \mathbb{R}^{n \times n} \mid X = X^T\}$; $\mathbb{S}^n_+$ denotes the set of $n \times n$ positive semidefinite matrices, $\mathbb{S}^n_+ = \{X \in \mathbb{S}^n \mid X \succeq 0\}$; $\mathbb{S}^n_{++}$ denotes the set of $n \times n$ positive definite matrices, $\mathbb{S}^n_{++} = \{X \in \mathbb{S}^n \mid X \succ 0\}$. For the definition of positive definite and semidefinite matrices, see Section 7.

2 Terminology and properties

1. The trace of a matrix $X \in \mathbb{R}^{n \times n}$ is defined as
\[ \mathrm{tr}(X) = \sum_{i=1}^n x_{ii} = \sum_{i=1}^n \lambda_i, \tag{1} \]
where $x_{ii}$ and $\lambda_i$, $i = 1, \ldots, n$, are the diagonal elements and eigenvalues of $X$, respectively. Some properties are
\[ \mathrm{tr}(A + B) = \mathrm{tr}(A) + \mathrm{tr}(B) \tag{2} \]
\[ \mathrm{tr}(cA) = c\,\mathrm{tr}(A) \tag{3} \]
\[ \mathrm{tr}(CD) = \mathrm{tr}(DC) \tag{4} \]
\[ \mathrm{tr}(QAQ^{-1}) = \mathrm{tr}(Q^{-1}QA) = \mathrm{tr}(A) \tag{5} \]
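As a quick numerical sanity check (not part of the original notes), the trace identities (1)–(5) can be verified with numpy; the matrix names are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
A, B = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))
Q = rng.standard_normal((4, 4))          # generically nonsingular

# tr(X) equals both the sum of diagonal entries and the sum of eigenvalues
assert np.isclose(np.trace(A), np.linalg.eigvals(A).sum().real)
# linearity: tr(A + B) = tr(A) + tr(B), tr(cA) = c tr(A)
assert np.isclose(np.trace(A + B), np.trace(A) + np.trace(B))
assert np.isclose(np.trace(2.5 * A), 2.5 * np.trace(A))
# cyclic property: tr(CD) = tr(DC), hence tr(Q A Q^{-1}) = tr(A)
assert np.isclose(np.trace(A @ B), np.trace(B @ A))
assert np.isclose(np.trace(Q @ A @ np.linalg.inv(Q)), np.trace(A))
```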

2. A principal minor of order $k$ of a matrix $X \in \mathbb{R}^{n \times n}$ is the determinant of a square submatrix of $X$ formed by deleting $n - k$ rows and the $n - k$ columns with the same indices. A leading principal minor of order $k$ of $X$ is the determinant of the square submatrix of $X$ formed by deleting the last $n - k$ rows and columns.

Example 1 (Principal minors [7]). List all the principal minors of the $3 \times 3$ matrix
\[ A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}. \tag{6} \]
Answer: There is one third-order principal minor of $A$, namely $\det(A)$. There are three second-order principal minors:
- $\begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix}$, formed by deleting the third row and column of $A$;
- $\begin{vmatrix} a_{11} & a_{13} \\ a_{31} & a_{33} \end{vmatrix}$, formed by deleting the second row and column of $A$;
- $\begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix}$, formed by deleting the first row and column of $A$.
There are also three first-order principal minors: $a_{11}$, by deleting the last two rows and columns; $a_{22}$, by deleting the first and last rows and columns; and $a_{33}$, by deleting the first two rows and columns.

Example 2 (Leading principal minors [7]). List the first-, second-, and third-order leading principal minors of the $3 \times 3$ matrix
\[ A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}. \tag{7} \]
Answer: There are three leading principal minors, one of order 1, one of order 2, and one of order 3:
1. $a_{11}$, formed by deleting the last two rows and columns of $A$;
2. $\begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix}$, formed by deleting the last row and column of $A$;
3. $\begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix}$, formed by deleting no rows or columns of $A$.

3. Two matrices $A, B \in \mathbb{R}^{n \times n}$ are said to be congruent if there exists a nonsingular matrix $Q \in \mathbb{R}^{n \times n}$ such that $A = QBQ^T$. If $A, B$ are congruent and $B$ is symmetric, then the numbers of positive, negative, and zero eigenvalues of $A$ and $B$ are the same. Therefore, if $B \succeq 0$ then $A \succeq 0$.
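The congruence statement (Sylvester's law of inertia) can be probed numerically; here is a minimal sketch, not from the original notes, assuming numpy and arbitrary matrix names:

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((5, 5)); B = (B + B.T) / 2    # symmetric B
Q = rng.standard_normal((5, 5))                       # generically nonsingular
A = Q @ B @ Q.T                                       # A is congruent to B

signs = lambda M: np.sign(np.round(np.linalg.eigvalsh(M), 12))
# same counts of positive, negative, and zero eigenvalues
assert all((signs(A) == s).sum() == (signs(B) == s).sum() for s in (-1, 0, 1))
```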

4. Let $A \in \mathbb{R}^{m \times n}$. The range or column space of $A$, denoted by $\mathcal{R}(A) \subseteq \mathbb{R}^m$, is the set of all linear combinations (span) of the columns of $A$,
\[ \mathcal{R}(A) = \{Ax \mid x \in \mathbb{R}^n\}. \tag{8} \]
$\dim \mathcal{R}(A) = r := \mathrm{rank}\,A \le \min\{m, n\}$. A matrix is full rank if $r = \min\{m, n\}$.
The nullspace or kernel of $A$, denoted by $\mathcal{N}(A) \subseteq \mathbb{R}^n$, is the set of vectors mapped into zero by $A$,
\[ \mathcal{N}(A) = \{x \mid Ax = 0\}. \tag{9} \]
$\dim \mathcal{N}(A) = n - r$.
The row space of $A$, denoted by $\mathcal{R}(A^T) \subseteq \mathbb{R}^n$, is the set of all linear combinations (span) of its rows,
\[ \mathcal{R}(A^T) = \{A^T x \mid x \in \mathbb{R}^m\}. \tag{10} \]
$\dim \mathcal{R}(A^T) = r$.
The left nullspace of $A$, denoted by $\mathcal{N}(A^T) \subseteq \mathbb{R}^m$, is the set of all vectors mapped to zero by $A^T$,
\[ \mathcal{N}(A^T) = \{x \mid A^T x = 0\}. \tag{11} \]
$\dim \mathcal{N}(A^T) = m - r$.
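A small numerical sketch of the dimension counts for the four subspaces (my addition; it assumes numpy and scipy are available):

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(2)
# build a 5x4 matrix of rank 2 so the dimension counts are nontrivial
A = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 4))
m, n = A.shape
r = np.linalg.matrix_rank(A)

assert null_space(A).shape[1] == n - r      # dim N(A)   = n - r
assert null_space(A.T).shape[1] == m - r    # dim N(A^T) = m - r
assert np.linalg.matrix_rank(A.T) == r      # dim R(A^T) = dim R(A) = r
```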

3 Inner product and norms

1. The standard inner product on $\mathbb{R}^n$ is defined as
\[ \langle x, y \rangle = x^T y = \sum_{i=1}^n x_i y_i \tag{12} \]
for $x, y \in \mathbb{R}^n$. The matrix inner product on $\mathbb{R}^{m \times n}$ is given by
\[ \langle X, Y \rangle = \mathrm{tr}(X^T Y) = \sum_{i=1}^m \sum_{j=1}^n X_{ij} Y_{ij} \tag{13} \]
for $X, Y \in \mathbb{R}^{m \times n}$. Note that the inner product of two matrices is the inner product of the associated vectors in $\mathbb{R}^{mn}$, obtained by stacking the elements of the matrices.

2. A real-valued function $\|x\|$ defined for all $x \in \mathbb{R}^n$ is said to be a norm if
- $\|x\| \ge 0$, and $\|x\| = 0$ if and only if $x = 0$ (positivity);
- $\|\alpha x\| = |\alpha| \, \|x\|$ for $\alpha \in \mathbb{R}$ (homogeneity);
- $\|x + y\| \le \|x\| + \|y\|$ (triangle inequality).
A matrix norm on $\mathbb{R}^{m \times n}$ can be defined similarly. A vector norm is a measure of the length of the vector.

3. The vector $\ell_1$-norm on $\mathbb{R}^n$ is defined as
\[ \|x\|_1 = \sum_{i=1}^n |x_i|, \tag{14} \]
that is, the sum of the absolute values of the entries. The matrix nuclear norm or trace norm on $\mathbb{R}^{m \times n}$ is given by
\[ \|X\|_* = \sum_{i=1}^r \sigma_i(X) = \mathrm{tr}\big((X^T X)^{1/2}\big), \tag{15} \]
where $r$ is the rank of $X$. In this note, $\sigma_i(X)$ denotes the $i$-th largest singular value of the rectangular matrix $X$, which is also equal to the square root of the $i$-th largest eigenvalue of the square matrix $X^T X$. The nuclear norm is just the $\ell_1$-norm of the vector of singular values. The matrix max-column-sum norm on $\mathbb{R}^{m \times n}$ is
\[ \|X\|_1 = \max_{j=1,\ldots,n} \sum_{i=1}^m |X_{ij}| = \sup\{\|Xu\|_1 \mid \|u\|_1 \le 1\}. \tag{16} \]

Example 3 (Sparse and low-rank structure). See, e.g., the support vector classifier in [1].

4. The vector $\ell_2$-norm or Euclidean norm on $\mathbb{R}^n$ is defined as
\[ \|x\|_2 = (x^T x)^{1/2} = \Big(\sum_{i=1}^n x_i^2\Big)^{1/2}. \tag{17} \]
The Frobenius norm on $\mathbb{R}^{m \times n}$ is given by
\[ \|X\|_F = \Big(\sum_{i=1}^r \sigma_i(X)^2\Big)^{1/2} = \big(\mathrm{tr}(X^T X)\big)^{1/2} = \Big(\sum_{i=1}^m \sum_{j=1}^n X_{ij}^2\Big)^{1/2}. \tag{18} \]
The Frobenius norm is the $\ell_2$-norm of the vector of singular values. Also, it is the Euclidean norm (or $\ell_2$-norm) of the vector obtained by stacking the elements of the matrix.

5. The vector $\ell_\infty$-norm or Chebyshev norm on $\mathbb{R}^n$ is defined as
\[ \|x\|_\infty = \max\{|x_1|, \ldots, |x_n|\}. \tag{19} \]
The spectral norm on $\mathbb{R}^{m \times n}$ is given by the maximum singular value,
\[ \|X\|_2 = \sigma_1(X). \tag{20} \]
The spectral norm is the $\ell_\infty$-norm of the vector of singular values; it gives the largest amplification factor or maximum gain of $X$. The max-row-sum norm on $\mathbb{R}^{m \times n}$ is
\[ \|X\|_\infty = \max_{i=1,\ldots,m} \sum_{j=1}^n |X_{ij}| = \sup\{\|Xu\|_\infty \mid \|u\|_\infty \le 1\}. \tag{21} \]
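These matrix norms are easy to cross-check in numpy; the following sketch (added here, not from the notes) relates each one to the vector of singular values:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((5, 3))
s = np.linalg.svd(X, compute_uv=False)   # singular values, sorted descending

assert np.isclose(np.linalg.norm(X, 'nuc'), s.sum())                # nuclear = l1 of s
assert np.isclose(np.linalg.norm(X, 'fro'), np.sqrt((s**2).sum()))  # Frobenius = l2 of s
assert np.isclose(np.linalg.norm(X, 2), s.max())                    # spectral = linf of s
# max-column-sum and max-row-sum norms
assert np.isclose(np.linalg.norm(X, 1), np.abs(X).sum(axis=0).max())
assert np.isclose(np.linalg.norm(X, np.inf), np.abs(X).sum(axis=1).max())
```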

6. The vector $\ell_1$, $\ell_2$, $\ell_\infty$ norms defined above are three special cases of a family of norms, the $\ell_p$-norm on $\mathbb{R}^n$ with $p \ge 1$,
\[ \|x\|_p = \Big(\sum_{i=1}^n |x_i|^p\Big)^{1/p}. \tag{22} \]
The matrix max-column-sum, spectral, and max-row-sum norms above are three special cases of a family of operator norms on $\mathbb{R}^{m \times n}$ defined as
\[ \|X\|_{a,b} = \sup\{\|Xu\|_a \mid \|u\|_b \le 1\}. \tag{23} \]
The operator norms with $a = b = 1$, $a = b = 2$, and $a = b = \infty$ reduce to the max-column-sum, spectral, and max-row-sum norms, respectively. (The Frobenius norm is not an operator norm; it comes from the matrix inner product (13).)

Example 4 (Ball and ellipsoid). $\{x \mid x^T P x \le 1\}$, an ellipsoid when $P \succ 0$.

7. The dual norm, denoted $\|\cdot\|_*$, associated with a norm $\|\cdot\|$ on $\mathbb{R}^n$ is defined as
\[ \|z\|_* = \sup\{z^T x \mid \|x\| \le 1\}. \tag{24} \]
The dual norm associated with a matrix norm $\|\cdot\|$ on $\mathbb{R}^{m \times n}$ is
\[ \|Z\|_* = \sup\{\mathrm{tr}(Z^T X) \mid \|X\| \le 1\}. \tag{25} \]
The dual norm of a dual norm is the original norm: $\|x\|_{**} = \|x\|$ for all $x$. For vector norms, the $\ell_1$-norm is the dual norm of the $\ell_\infty$-norm; the $\ell_2$-norm is the dual norm of itself. For matrix norms, the nuclear norm is the dual norm of the spectral norm; the Frobenius norm is the dual norm of itself. Generally, the dual norm of the $\ell_p$-norm is the $\ell_q$-norm, where $p, q$ satisfy $1/p + 1/q = 1$. From the definition of the dual norm we have the inequality
\[ z^T x \le \|x\| \, \|z\|_* \tag{26} \]
for all $x, z$; it is equivalent to Hölder's inequality
\[ |z^T x| \le \|z\|_p \|x\|_q, \tag{27} \]
where $1/p + 1/q = 1$. The special case $p = q = 2$ is often referred to as the Cauchy–Schwarz inequality,
\[ |z^T x| \le \|x\|_2 \|z\|_2, \tag{28} \]
\[ \mathrm{tr}(Z^T X) \le \|X\|_F \|Z\|_F. \tag{29} \]

8. Inequalities related to norms. For all $x \in \mathbb{R}^n$, the following inequalities hold:
\[ \|x\|_\infty \le \|x\|_2 \le \|x\|_1 \le \sqrt{n}\,\|x\|_2 \le n\,\|x\|_\infty. \tag{30} \]
For all $X \in \mathbb{R}^{m \times n}$ with $\mathrm{rank}(X) = r$, the following inequalities hold:
\[ \|X\|_2 \le \|X\|_F \le \|X\|_* \le \sqrt{r}\,\|X\|_F \le r\,\|X\|_2. \tag{31} \]
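A quick numerical check of the chains (30) and (31), added here as a sketch (it is not part of the original notes):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 7
x = rng.standard_normal(n)
l1, l2, linf = (np.linalg.norm(x, p) for p in (1, 2, np.inf))
assert linf <= l2 <= l1 <= np.sqrt(n) * l2 <= n * linf          # chain (30)

X = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 5))   # rank r = 2
r = np.linalg.matrix_rank(X)
spec, fro, nuc = (np.linalg.norm(X, o) for o in (2, 'fro', 'nuc'))
assert spec <= fro <= nuc <= np.sqrt(r) * fro <= r * spec       # chain (31)
```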

4 Projection

1. A matrix $P \in \mathbb{R}^{n \times n}$ is called a projection matrix if it satisfies $P^2 = P$, i.e., $P$ is idempotent. The range $\mathcal{R}(P)$ and nullspace $\mathcal{N}(P)$ are disjoint linear subspaces of $\mathbb{R}^n$ (intersecting only at the origin) such that $\mathcal{R}(P) + \mathcal{N}(P) = \mathbb{R}^n$, and $Px = x$ for $x \in \mathcal{R}(P)$. For any projection $P$, $I - P$ is also a projection, with $\mathcal{R}(I - P) = \mathcal{N}(P)$ and $\mathcal{N}(I - P) = \mathcal{R}(P)$.

2. A projection matrix $P$ is an orthogonal projection matrix if and only if $\mathcal{R}(P) \perp \mathcal{N}(P)$, if and only if $P = P^T$. Given a subspace $S$ of $\mathbb{R}^n$, there exists one and only one orthogonal projection $P$ such that $\mathcal{R}(P) = S$. Let $x_0 \in \mathbb{R}^n$; then for all $y \in S$ with $y \ne P x_0$,
\[ \|x_0 - P x_0\| < \|x_0 - y\|, \tag{32} \]
i.e., $\|x_0 - P x_0\| = \inf_{y \in S} \|x_0 - y\|$.

Example 5. Some examples of projection matrices $P \in \mathbb{S}^n$.
- Projection onto a one-dimensional subspace (line):
\[ P = \frac{a a^T}{a^T a}, \tag{33} \]
where $a$ is any nonzero vector along the line. If $a$ is normalized, the projection is simply $P = a a^T$. It is easy to verify that $P$ is symmetric and that $P^2 = P$.
- Least squares problem. An overdetermined system of equations
\[ Ax = b, \tag{34} \]
where $A \in \mathbb{R}^{m \times n}$ has full column rank, $m > n$, and $b \in \mathbb{R}^m$, in general has no solution. However, we can find an $x$ that minimizes $\|b - Ax\|_2$. To do so, we project $b$ onto the range of $A$, $\mathcal{R}(A)$, which minimizes the distance between $b$ and $\mathcal{R}(A)$. To find $y = Ax \in \mathcal{R}(A)$ such that the error $e = b - y$ is perpendicular to $\mathcal{R}(A)$, we require
\[ a_1^T (b - Ax) = 0, \;\ldots,\; a_n^T (b - Ax) = 0, \tag{35} \]
where $a_1, \ldots, a_n$ are the columns of $A$; that is,
\[ A^T (b - Ax) = 0 \iff A^T A x = A^T b. \tag{36} \]
Since $A$ has full column rank, $A^T A$ is nonsingular, so the optimal $x$ is $x = (A^T A)^{-1} A^T b$. Therefore
\[ P b = A (A^T A)^{-1} A^T b \tag{37} \]
and
\[ P = A (A^T A)^{-1} A^T. \tag{38} \]
The matrix $(A^T A)^{-1} A^T$ is also called the pseudo-inverse of $A \in \mathbb{R}^{m \times n}$ when $m > n$ and $A$ has full column rank.
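Here is a short numpy sketch of this construction (my addition, with arbitrary data); `np.linalg.lstsq` solves the same problem internally:

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((8, 3))          # tall, full column rank (generically)
b = rng.standard_normal(8)

x = np.linalg.solve(A.T @ A, A.T @ b)    # normal equations: A^T A x = A^T b
P = A @ np.linalg.inv(A.T @ A) @ A.T     # orthogonal projector onto R(A), eq. (38)

assert np.allclose(P @ b, A @ x)                   # Pb is the point Ax in R(A)
assert np.allclose(A.T @ (b - A @ x), 0)           # residual is orthogonal to R(A)
assert np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0])   # agrees with lstsq
```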

5 Matrix decomposition

5.1 Eigenvalue decomposition

Suppose $A \in \mathbb{R}^{n \times n}$ is a square matrix.

1. A nonzero vector $q \in \mathbb{R}^n$ is called an eigenvector of $A$ if there exists a scalar $\lambda$ such that
\[ A q = \lambda q. \tag{39} \]
The scalar $\lambda$ is an eigenvalue of $A$, and satisfies
\[ \det(\lambda I - A) = 0. \tag{40} \]
Collecting eigenvectors and eigenvalues, this can also be written as
\[ A Q = Q \Lambda, \tag{41} \]
where $Q = [q_1, \ldots, q_n]$ and $\Lambda = \mathrm{diag}(\lambda_1, \ldots, \lambda_n)$. The eigenvalues are the roots of the characteristic polynomial $\det(sI - A)$. Any matrix $A \in \mathbb{R}^{n \times n}$ has $n$ eigenvalues (counted with multiplicity). The eigenvectors associated with a single eigenvalue $\lambda$, together with the zero vector, form a linear subspace called an eigenspace.

2. The algebraic multiplicity $\zeta$ of an eigenvalue $\lambda$ is the multiplicity of the corresponding root of the characteristic polynomial. The geometric multiplicity $\eta$ of an eigenvalue $\lambda$ is the dimension of the associated eigenspace, i.e., $\dim \mathcal{N}(\lambda I - A)$. For every eigenvalue, $\eta \le \zeta$, and the algebraic multiplicities sum to $n$ over the $N_\lambda$ distinct eigenvalues of $A$. In particular, $\mathrm{rank}(A) = \dim \mathcal{R}(A) = n - \eta(0)$, where $\eta(0)$ is the geometric multiplicity of the eigenvalue $0$ ($\eta(0) = 0$ if $0$ is not an eigenvalue). If $\zeta = \eta$ for all eigenvalues of $A$, i.e., $A$ has a set of $n$ linearly independent eigenvectors, then $A$ is said to be diagonalizable or nondefective. If all eigenvalues of $A$ are distinct, $A$ is diagonalizable.

3. $A$ is said to be diagonalizable if $A$ can be factored as
\[ A = Q \Lambda Q^{-1}, \tag{42} \]
where $Q$ is invertible. This is called the eigenvalue decomposition or spectral decomposition of $A$.

4. Suppose $A \in \mathbb{S}^n$ is a symmetric matrix. All eigenvalues of a symmetric matrix are real, and $A$ can be factored as
\[ A = Q \Lambda Q^T = \sum_{i=1}^n \lambda_i q_i q_i^T, \tag{43} \]
where $Q \in \mathbb{R}^{n \times n}$ is orthogonal, i.e., $Q^T Q = I$, and $\Lambda = \mathrm{diag}(\lambda_1, \ldots, \lambda_n)$. Usually the (real) eigenvalues are ordered, i.e., $\lambda_i(A)$ is the $i$-th largest eigenvalue of $A$.

5. The $k$-th ($k \ge 1$) power of $A \in \mathbb{S}^n$ is defined as $A^k = Q \Lambda^k Q^T$.
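A brief numpy illustration (mine, not the notes') of the symmetric eigendecomposition (43) and the power formula:

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((4, 4)); A = (A + A.T) / 2    # symmetric A

lam, Q = np.linalg.eigh(A)        # real eigenvalues, orthonormal eigenvectors
assert np.allclose(Q @ np.diag(lam) @ Q.T, A)               # A = Q Lambda Q^T
assert np.allclose(Q.T @ Q, np.eye(4))                      # Q^T Q = I
assert np.allclose(Q @ np.diag(lam**3) @ Q.T, A @ A @ A)    # A^3 = Q Lambda^3 Q^T
```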

5.2 Singular value decomposition and pseudo-inverse

Suppose $A \in \mathbb{R}^{m \times n}$ with $\mathrm{rank}\,A = r$.

1. $A$ can be factored as
\[ A = U \Sigma V^T = \sum_{i=1}^r \sigma_i u_i v_i^T, \tag{44} \]
where $U \in \mathbb{R}^{m \times r}$ is an orthonormal matrix of left singular vectors, $U^T U = I$; $V \in \mathbb{R}^{n \times r}$ is an orthonormal matrix of right singular vectors, $V^T V = I$; and $\Sigma = \mathrm{diag}(\sigma_1, \ldots, \sigma_r)$ is a diagonal matrix of ordered singular values $\sigma_1 \ge \cdots \ge \sigma_r > 0$. This is called the singular value decomposition (SVD) of $A$.

2. The singular value decomposition of $A$ is related to the eigenvalue decompositions of the symmetric matrices $A^T A$ and $A A^T$:
\[ A^T A = V \Sigma U^T U \Sigma V^T = V \Sigma^2 V^T = [V\ \tilde{V}] \begin{bmatrix} \Sigma^2 & 0 \\ 0 & 0 \end{bmatrix} [V\ \tilde{V}]^T, \tag{45} \]
\[ A A^T = U \Sigma V^T V \Sigma U^T = U \Sigma^2 U^T = [U\ \tilde{U}] \begin{bmatrix} \Sigma^2 & 0 \\ 0 & 0 \end{bmatrix} [U\ \tilde{U}]^T, \tag{46} \]
where $\tilde{V}, \tilde{U}$ are any matrices for which $[V\ \tilde{V}]$ and $[U\ \tilde{U}]$ are orthogonal. The right-hand expressions are eigenvalue decompositions of $A^T A$ and $A A^T$. The singular values $\sigma_i$ are the square roots of the eigenvalues of $A^T A$ and $A A^T$, i.e., $\sigma_i(A) = \sqrt{\lambda_i(A^T A)} = \sqrt{\lambda_i(A A^T)}$ (with $\lambda_i(A^T A) = \lambda_i(A A^T) = 0$ for $i > r$). The left singular vectors $U = [u_1, \ldots, u_r]$ are eigenvectors of $A A^T$ and also an orthonormal basis for $\mathcal{R}(A)$. The right singular vectors $V = [v_1, \ldots, v_r]$ are eigenvectors of $A^T A$ and also an orthonormal basis for $\mathcal{R}(A^T)$.

3. Let $A = U_1 \Sigma_1 V_1^T$ be the singular value decomposition of $A$. The full singular value decomposition of $A$ is
\[ A = U_1 \Sigma_1 V_1^T = [U_1\ U_2] \begin{bmatrix} \Sigma_1 & 0_{r \times (n-r)} \\ 0_{(m-r) \times r} & 0_{(m-r) \times (n-r)} \end{bmatrix} \begin{bmatrix} V_1^T \\ V_2^T \end{bmatrix}, \tag{47} \]
where $U_2 \in \mathbb{R}^{m \times (m-r)}$, $V_2 \in \mathbb{R}^{n \times (n-r)}$ are any matrices for which $[U_1\ U_2] \in \mathbb{R}^{m \times m}$ and $[V_1\ V_2] \in \mathbb{R}^{n \times n}$ are orthogonal.
- $U_1$ is an orthonormal basis of $\mathcal{R}(A)$.
- $V_1$ is an orthonormal basis of $\mathcal{R}(A^T)$.
- $U_2$ is an orthonormal basis of $\mathcal{N}(A^T)$.
- $V_2$ is an orthonormal basis of $\mathcal{N}(A)$.
Therefore, $\mathcal{R}(A)$ is the orthogonal complement of $\mathcal{N}(A^T)$, $\mathcal{R}(A^T)$ is the orthogonal complement of $\mathcal{N}(A)$, and
\[ \mathcal{R}(A) \oplus \mathcal{N}(A^T) = \mathbb{R}^m, \qquad \mathcal{R}(A^T) \oplus \mathcal{N}(A) = \mathbb{R}^n, \tag{48} \]
where $\oplus$ refers to the orthogonal direct sum.
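A numpy sketch (added for illustration) of the full SVD (47) and the orthonormal bases $U_1$, $U_2$, $V_1$, $V_2$:

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 4))   # rank r = 2
r = 2

U, s, Vt = np.linalg.svd(A)          # full SVD: U is m x m, Vt is n x n
U1, U2 = U[:, :r], U[:, r:]          # bases of R(A) and N(A^T)
V1, V2 = Vt[:r].T, Vt[r:].T          # bases of R(A^T) and N(A)

assert np.allclose(U1 @ np.diag(s[:r]) @ V1.T, A)   # thin SVD reconstructs A
assert np.allclose(A @ V2, 0)                       # columns of V2 lie in N(A)
assert np.allclose(A.T @ U2, 0)                     # columns of U2 lie in N(A^T)
```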

4. Let $A$ have the full singular value decomposition defined above. The pseudo-inverse or Moore–Penrose inverse of $A$, denoted by $A^\dagger \in \mathbb{R}^{n \times m}$, is defined as
\[ A^\dagger = V_1 \Sigma_1^{-1} U_1^T. \tag{49} \]
If $\mathrm{rank}\,A = n$ (tall, i.e., $m > n$, and full rank), then $A^\dagger = (A^T A)^{-1} A^T$; if $\mathrm{rank}\,A = m$ (fat, i.e., $m < n$, and full rank), then $A^\dagger = A^T (A A^T)^{-1}$. If $A$ is square and nonsingular, $A^\dagger = A^{-1}$.
- $A A^\dagger = U_1 U_1^T \in \mathbb{R}^{m \times m}$ gives the projection onto $\mathcal{R}(A)$.
- $A^\dagger A = V_1 V_1^T \in \mathbb{R}^{n \times n}$ gives the projection onto $\mathcal{R}(A^T)$.
- $I - A A^\dagger = U_2 U_2^T \in \mathbb{R}^{m \times m}$ gives the projection onto $\mathcal{N}(A^T)$.
- $I - A^\dagger A = V_2 V_2^T \in \mathbb{R}^{n \times n}$ gives the projection onto $\mathcal{N}(A)$.

6 Quadratic forms and matrix gain

Suppose $A \in \mathbb{R}^{n \times n}$ is a square matrix.

1. A function $f : \mathbb{R}^n \to \mathbb{R}$ is called a quadratic form if it is of the form
\[ f(x) = x^T A x = \sum_{i=1}^n \sum_{j=1}^n A_{ij} x_i x_j. \tag{50} \]
In a quadratic form we may assume $A = A^T$, since
\[ x^T A x = x^T \big((A + A^T)/2\big) x, \tag{51} \]
where $(A + A^T)/2$ is called the symmetric part of $A$. The antisymmetric part of $A$ is $(A - A^T)/2$. Each matrix can be written as the sum of its symmetric and antisymmetric parts,
\[ A = \frac{A + A^T}{2} + \frac{A - A^T}{2}. \tag{52} \]

2. For $A \in \mathbb{S}^n$, the largest and smallest eigenvalues satisfy
\[ \lambda_{\max}(A) = \lambda_1(A) = \sup_{x \ne 0} \frac{x^T A x}{x^T x}, \qquad \lambda_{\min}(A) = \lambda_n(A) = \inf_{x \ne 0} \frac{x^T A x}{x^T x}. \tag{53} \]
Thus for any $x$,
\[ \lambda_{\min}(A)\, x^T x \le x^T A x \le \lambda_{\max}(A)\, x^T x. \tag{54} \]
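To make (53)–(54) concrete, here is a small randomized check in numpy (a sketch added to these notes):

```python
import numpy as np

rng = np.random.default_rng(8)
A = rng.standard_normal((4, 4)); A = (A + A.T) / 2   # symmetric A
lam = np.linalg.eigvalsh(A)                          # ascending eigenvalues

for _ in range(1000):
    x = rng.standard_normal(4)
    q = x @ A @ x / (x @ x)          # Rayleigh quotient x^T A x / x^T x
    assert lam[0] - 1e-12 <= q <= lam[-1] + 1e-12    # bounds (54)
```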

Analogously, the largest and smallest singular values of $B \in \mathbb{R}^{m \times n}$ satisfy
\[ \sigma_{\max}(B) = \sup_{x, y \ne 0} \frac{x^T B y}{\|x\|_2 \|y\|_2} = \sup_{y \ne 0} \frac{\|B y\|_2}{\|y\|_2} = \sup_{y \ne 0} \sqrt{\frac{y^T B^T B y}{y^T y}} = \sqrt{\lambda_{\max}(B^T B)}, \tag{55} \]
which is also the spectral norm of $B$, and
\[ \sigma_{\min}(B) = \inf_{y \ne 0} \frac{\|B y\|_2}{\|y\|_2} = \inf_{y \ne 0} \sqrt{\frac{y^T B^T B y}{y^T y}} = \sqrt{\lambda_{\min}(B^T B)}. \tag{56} \]
To generalize we have the following definition.

3. The matrix gain or amplification factor of $B$ in the direction $x$ is defined as
\[ \frac{\|B x\|}{\|x\|}. \tag{57} \]
The maximum (minimum) gain direction of $B$ is that of the right singular vector associated with the largest (smallest) singular value.

7 Positive semidefinite matrices

Suppose $A \in \mathbb{S}^n$ is a real symmetric matrix with eigenvalue decomposition $A = Q \Lambda Q^T$.

1. $A$ is said to be positive semidefinite (PSD), denoted by $A \succeq 0$, if $x^T A x \ge 0$ for all $x \in \mathbb{R}^n$. A real symmetric matrix $A$ is said to be positive definite (PD), $A \succ 0$, if $x^T A x > 0$ for all $x \ne 0$, $x \in \mathbb{R}^n$. The set of all positive semidefinite matrices $\mathbb{S}^n_+$ is a proper cone (see the definition on pp. 43, Figure 2.12 on pp. 35, and Example 2.24 on pp. 52).

2. $A$ is said to be negative semidefinite, denoted by $A \preceq 0$, if $-A \succeq 0$, and is said to be negative definite, denoted by $A \prec 0$, if $-A \succ 0$.

3. $A \succeq (\succ)\, 0$ is equivalent to each of the following:
- All eigenvalues of $A$ are nonnegative (positive), i.e., $\lambda_i \ge (>)\, 0$, $i = 1, \ldots, n$.
- All principal minors of $A$ are nonnegative (all leading principal minors of $A$ are positive).
- There exists a (nonsingular) square matrix $B \in \mathbb{R}^{n \times n}$ such that $A = B^T B$.

Example 6. Let $B \in \mathbb{R}^{m \times n}$.
- $B^T B \succeq 0$, since $x^T B^T B x = \|Bx\|_2^2 \ge 0$ for all $x \in \mathbb{R}^n$.
- $B B^T \succeq 0$, since $x^T B B^T x = \|B^T x\|_2^2 \ge 0$ for all $x \in \mathbb{R}^m$.
- $B^T B \succ 0$ if $\mathrm{rank}\,B = n$, and $B B^T \succ 0$ if $\mathrm{rank}\,B = m$ (full rank).
- Given observation data $X = [x_1, \ldots, x_n] \in \mathbb{R}^{m \times n}$ with $\sum_i x_i = 0$, the covariance matrix of $X$ is $C = X X^T = \sum_{i=1}^n x_i x_i^T \in \mathbb{R}^{m \times m}$, and the Gram matrix of $X$ is $G = X^T X \in \mathbb{R}^{n \times n}$, $G_{ij} = x_i^T x_j$. The covariance matrix and Gram matrix are both positive semidefinite. See more on the Gram matrix in [1].
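The equivalent characterizations in item 3 can be probed numerically; a sketch (assuming numpy) using Example 6's $B^T B$ construction:

```python
import numpy as np

rng = np.random.default_rng(9)
B = rng.standard_normal((6, 4))
A = B.T @ B                      # A = B^T B is positive semidefinite

assert np.all(np.linalg.eigvalsh(A) >= -1e-12)     # all eigenvalues nonnegative
# leading principal minors are positive since B has full column rank (A > 0)
minors = [np.linalg.det(A[:k, :k]) for k in range(1, 5)]
assert all(mk > 0 for mk in minors)
np.linalg.cholesky(A)            # succeeds exactly when A is positive definite
```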

4. If $A \succeq (\succ)\, 0$, then $\mathrm{tr}(A) \ge (>)\, 0$ and $\det(A) \ge (>)\, 0$, and the $k$-th ($k \ge 1$) root of $A$ is defined as $A^{1/k} = Q \Lambda^{1/k} Q^T$; in particular, the square root of $A$ is $A^{1/2} = Q \Lambda^{1/2} Q^T$. If also $B \succeq (\succ)\, 0$, then the inner product $\mathrm{tr}(AB) \ge (>)\, 0$ and the Hadamard product $A \circ B = (A_{ij} B_{ij}) \succeq (\succ)\, 0$. However, the matrix product $AB \succeq 0$ only when $AB = BA$.

5. If $A \succ 0$, then $A^{-1} \succ 0$, and $A$ can be factored as
\[ A = L L^T, \tag{58} \]
where $L$ is lower triangular and nonsingular with positive diagonal elements. This is called the Cholesky factorization of $A$. See solving positive definite sets of equations using the Cholesky factorization in [1]; a numerical sketch of such a solve appears at the end of this section.

6. Let $A, B, C, D \in \mathbb{S}^n$. The matrix inequalities (a partial order on $\mathbb{S}^n$) are defined as: $A \succeq B$ if $A - B \succeq 0$; $A \succ B$ if $A - B \succ 0$. Many standard properties hold:
- Addition: if $A \succeq (\succ)\, B$ and $C \succeq D$, then $A + C \succeq (\succ)\, B + D$. In particular, if $A \succeq (\succ)\, 0$ and $B \succeq 0$, then $A + B \succeq (\succ)\, 0$.
- Nonnegative (positive) scaling: if $A \succeq (\succ)\, B$ and $\alpha \ge (>)\, 0$, then $\alpha A \succeq (\succ)\, \alpha B$. In particular, if $A \succeq (\succ)\, 0$ and $\alpha \ge (>)\, 0$, then $\alpha A \succeq (\succ)\, 0$.
- Transitivity: if $A \succeq (\succ)\, B$ and $B \succeq (\succ)\, C$, then $A \succeq (\succ)\, C$.
- Reflexivity: $A \succeq A$.
- Antisymmetry: if $A \succeq B$ and $B \succeq A$, then $A = B$.
- If $A \succ B$, then for $C, D$ small enough, $A + C \succ B + D$.
- If $A \succeq B \succeq 0$, then $\lambda_i(A) \ge \lambda_i(B)$, $i = 1, \ldots, n$, $\mathrm{tr}(A) \ge \mathrm{tr}(B)$, and $\det(A) \ge \det(B)$.
See more on generalized inequalities and their properties in [1].
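As promised in item 5, here is a sketch (not in the original notes) of solving a positive definite system via the Cholesky factorization (58), using scipy's triangular solves:

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

rng = np.random.default_rng(10)
B = rng.standard_normal((5, 5))
A = B.T @ B + np.eye(5)              # positive definite
b = rng.standard_normal(5)

L = cholesky(A, lower=True)          # A = L L^T, L lower triangular
y = solve_triangular(L, b, lower=True)         # forward solve  L y = b
x = solve_triangular(L.T, y, lower=False)      # backward solve L^T x = y
assert np.allclose(A @ x, b)
```

Two triangular solves cost far less than forming $A^{-1}$, which is why this is the standard way to solve positive definite systems.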

8 Schur complement

1. Let $X \in \mathbb{R}^{n \times n}$ be partitioned as
\[ X = \begin{bmatrix} A & B \\ C & D \end{bmatrix}, \tag{59} \]
where $A \in \mathbb{R}^{k \times k}$, $B \in \mathbb{R}^{k \times (n-k)}$, $C \in \mathbb{R}^{(n-k) \times k}$, $D \in \mathbb{R}^{(n-k) \times (n-k)}$. Assume $A$ is nonsingular. To solve the linear equation
\[ \begin{bmatrix} A & B \\ C & D \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} u \\ v \end{bmatrix}, \tag{60} \]
we eliminate $x$ from the top block equation,
\[ x = A^{-1}(u - B y). \tag{61} \]
Substituting this into the bottom block equation and assuming $D - C A^{-1} B$ is nonsingular, we obtain
\[ y = (D - C A^{-1} B)^{-1}(v - C A^{-1} u) = -(D - C A^{-1} B)^{-1} C A^{-1} u + (D - C A^{-1} B)^{-1} v. \tag{62} \]
Substituting this into the first block equation yields
\[ x = \big(A^{-1} + A^{-1} B (D - C A^{-1} B)^{-1} C A^{-1}\big) u - A^{-1} B (D - C A^{-1} B)^{-1} v. \tag{63} \]
The Schur complement of $A$ in $X$ is defined as
\[ S = D - C A^{-1} B \in \mathbb{R}^{(n-k) \times (n-k)}, \tag{64} \]
and $x$ and $y$ can be written in terms of $S$:
\[ x = (A^{-1} + A^{-1} B S^{-1} C A^{-1}) u - A^{-1} B S^{-1} v, \tag{65} \]
\[ y = -S^{-1} C A^{-1} u + S^{-1} v. \tag{66} \]
These two equations yield a formula for the inverse of a block matrix,
\[ \begin{bmatrix} A & B \\ C & D \end{bmatrix}^{-1} = \begin{bmatrix} A^{-1} + A^{-1} B S^{-1} C A^{-1} & -A^{-1} B S^{-1} \\ -S^{-1} C A^{-1} & S^{-1} \end{bmatrix}, \tag{67} \]
and
\[ \begin{bmatrix} A & B \\ C & D \end{bmatrix}^{-1} = \begin{bmatrix} I & -A^{-1} B \\ 0 & I \end{bmatrix} \begin{bmatrix} A^{-1} & 0 \\ 0 & S^{-1} \end{bmatrix} \begin{bmatrix} I & 0 \\ -C A^{-1} & I \end{bmatrix}. \tag{68} \]
It follows immediately that
\[ \begin{bmatrix} A & B \\ C & D \end{bmatrix} = \begin{bmatrix} I & 0 \\ C A^{-1} & I \end{bmatrix} \begin{bmatrix} A & 0 \\ 0 & S \end{bmatrix} \begin{bmatrix} I & A^{-1} B \\ 0 & I \end{bmatrix}. \tag{69} \]
Similarly, if $D$ is nonsingular, the Schur complement of $D$ in $X$ is defined as
\[ \hat{S} = A - B D^{-1} C \in \mathbb{R}^{k \times k}. \tag{70} \]
Then we have
\[ \begin{bmatrix} A & B \\ C & D \end{bmatrix}^{-1} = \begin{bmatrix} I & 0 \\ -D^{-1} C & I \end{bmatrix} \begin{bmatrix} \hat{S}^{-1} & 0 \\ 0 & D^{-1} \end{bmatrix} \begin{bmatrix} I & -B D^{-1} \\ 0 & I \end{bmatrix} \tag{71} \]
and
\[ \begin{bmatrix} A & B \\ C & D \end{bmatrix} = \begin{bmatrix} I & B D^{-1} \\ 0 & I \end{bmatrix} \begin{bmatrix} \hat{S} & 0 \\ 0 & D \end{bmatrix} \begin{bmatrix} I & 0 \\ D^{-1} C & I \end{bmatrix}. \tag{72} \]

2. Let $X \in \mathbb{S}^n$ with $A$ nonsingular,
\[ X = \begin{bmatrix} A & B \\ B^T & D \end{bmatrix} = \begin{bmatrix} I & 0 \\ B^T A^{-1} & I \end{bmatrix} \begin{bmatrix} A & 0 \\ 0 & S \end{bmatrix} \begin{bmatrix} I & A^{-1} B \\ 0 & I \end{bmatrix}, \tag{73} \]
where $S = D - B^T A^{-1} B \in \mathbb{S}^{n-k}$ is the Schur complement of $A$ in $X$, with $A \in \mathbb{S}^k$, $B \in \mathbb{R}^{k \times (n-k)}$, $D \in \mathbb{S}^{n-k}$. Note that $X$ and
\[ Y = \begin{bmatrix} A & 0 \\ 0 & S \end{bmatrix} \tag{74} \]
are congruent matrices; therefore, if $X \succeq 0$ then $Y \succeq 0$. The block diagonal matrix $Y$ is positive semidefinite if and only if each diagonal block is positive semidefinite. The characterization of positive definite or semidefinite block matrices $X$ is then as follows:
- $X \succ 0$ if and only if $A \succ 0$ and $S \succ 0$.
- If $A \succ 0$, then $X \succeq 0$ if and only if $S \succeq 0$.
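Both the block-inverse formula (67) and the definiteness characterization can be verified numerically; a short sketch (my addition, with arbitrary block sizes), specialized to the symmetric case $C = B^T$:

```python
import numpy as np

rng = np.random.default_rng(11)
k, n = 2, 5
M = rng.standard_normal((n, n))
X = M @ M.T + np.eye(n)                   # X symmetric positive definite
A, B, D = X[:k, :k], X[:k, k:], X[k:, k:]
S = D - B.T @ np.linalg.inv(A) @ B        # Schur complement of A in X

# X > 0 if and only if A > 0 and S > 0
assert np.all(np.linalg.eigvalsh(A) > 0) and np.all(np.linalg.eigvalsh(S) > 0)

# block-inverse formula (67) with C = B^T
Ai, Si = np.linalg.inv(A), np.linalg.inv(S)
Xi = np.block([[Ai + Ai @ B @ Si @ B.T @ Ai, -Ai @ B @ Si],
               [-Si @ B.T @ Ai,              Si          ]])
assert np.allclose(Xi, np.linalg.inv(X))
```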

3. For the interpretation of the Schur complement as arising from minimizing a quadratic form, see pp. 650.
4. The Schur complement can be generalized to the case when $A$ is singular; see [4].

9 Multivariate calculus

1. Suppose that a function $f$ is differentiable on its domain and $x \in \mathrm{int\,dom}\,f$ (the interior of the domain of $f$). The gradient of a real-valued function $f : \mathbb{R}^n \to \mathbb{R}$ at $x$ is the vector $\nabla f(x) \in \mathbb{R}^n$ with elements
\[ \nabla f(x)_i = \frac{\partial f}{\partial x_i}, \quad i = 1, \ldots, n. \tag{75} \]
The Jacobian of a vector-valued function $f : \mathbb{R}^n \to \mathbb{R}^m$ at $x$ is the matrix $Df(x) \in \mathbb{R}^{m \times n}$ with elements
\[ Df(x)_{ij} = \frac{\partial f_i}{\partial x_j}. \tag{76} \]
When $f$ is real-valued, the gradient is the transpose of the Jacobian, $\nabla f(x) = Df(x)^T$. The first-order approximation of $f$ at a point $x$ is
\[ \hat{f}(z) = f(x) + \nabla f(x)^T (z - x). \tag{77} \]

2. Suppose that a real-valued function $f : \mathbb{R}^n \to \mathbb{R}$ is twice differentiable on its domain and $x \in \mathrm{int\,dom}\,f$. The Hessian matrix or second-order derivative of $f$ at $x$ is the matrix $\nabla^2 f(x) \in \mathbb{R}^{n \times n}$ with elements
\[ \nabla^2 f(x)_{ij} = \frac{\partial^2 f(x)}{\partial x_i \partial x_j}. \tag{78} \]
The second-order approximation of $f$ at a point $x$ is
\[ \hat{f}(z) = f(x) + \nabla f(x)^T (z - x) + \tfrac{1}{2} (z - x)^T \nabla^2 f(x) (z - x). \tag{79} \]
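Gradients derived by hand can always be checked against finite differences; here is a numpy sketch (mine; the analytic gradient used anticipates Example 7 below):

```python
import numpy as np

rng = np.random.default_rng(12)
n = 4
A, b = rng.standard_normal((n, n)), rng.standard_normal(n)
f    = lambda x: 0.5 * x @ A @ x + b @ x
grad = lambda x: 0.5 * (A + A.T) @ x + b       # analytic gradient
x0, h = rng.standard_normal(n), 1e-6

# central differences approximate each partial derivative in (75)
fd = np.array([(f(x0 + h*e) - f(x0 - h*e)) / (2*h) for e in np.eye(n)])
assert np.allclose(fd, grad(x0), atol=1e-5)
```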

Example 7.
- $f(x) = b^T x = x^T b$, where $b, x \in \mathbb{R}^n$:
\[ \nabla f(x) = b. \tag{80} \]
- $f(x) = A x$, where $A \in \mathbb{R}^{m \times n}$:
\[ Df(x) = A. \tag{81} \]
- $f(x) = x^T x$:
\[ \nabla f(x) = 2x. \tag{82} \]
- $f(x) = x^T A x$, where $A \in \mathbb{R}^{n \times n}$:
\[ \nabla f(x) = (A + A^T) x, \tag{83} \]
\[ \nabla^2 f(x) = A + A^T. \tag{84} \]
If $A$ is symmetric, i.e., $A \in \mathbb{S}^n$, then $\nabla x^T A x = 2 A x$ and $\nabla^2 x^T A x = 2 A$.
- $f(X) = \log \det X$, where $X \in \mathbb{S}^n_{++}$ (pp. 641):
\[ \nabla f(X) = X^{-1}. \tag{85} \]
The Hessian acts as the bilinear form
\[ \langle U, \nabla^2 f(X) V \rangle = -\mathrm{tr}(X^{-1} U X^{-1} V), \tag{86} \]
where $U, V \in \mathbb{S}^n$.
More examples appear in the first- and second-order conditions of convex functions in [1].

3. Suppose $f : \mathbb{R}^n \to \mathbb{R}^m$ and $g : \mathbb{R}^m \to \mathbb{R}$ are differentiable. The composition $h : \mathbb{R}^n \to \mathbb{R}$ is defined as $h(x) = g(f(x))$. The gradient of $h$ is
\[ \nabla h(x) = Df(x)^T \nabla g(f(x)). \tag{87} \]
This follows from the general chain rule,
\[ Dh(x) = Dg(f(x)) \, Df(x). \tag{88} \]
As an example, suppose $f : \mathbb{R}^n \to \mathbb{R}$ and $g : \mathbb{R} \to \mathbb{R}$; then
\[ \nabla h(x) = g'(f(x)) \nabla f(x) \tag{89} \]
and
\[ \nabla^2 h(x) = g'(f(x)) \nabla^2 f(x) + g''(f(x)) \nabla f(x) \nabla f(x)^T. \tag{90} \]

Example 8.
- Composition with an affine function: $f(x) = A x + b$, where $A \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^m$:
\[ \nabla h(x) = A^T \nabla g(A x + b), \tag{91} \]
\[ \nabla^2 h(x) = A^T \nabla^2 g(A x + b) A. \tag{92} \]
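A numerical check of (91) (an added sketch; $g$ here is the log-sum-exp function whose derivatives are developed in the next item, and Example 8 continues after this block):

```python
import numpy as np

rng = np.random.default_rng(13)
m, n = 3, 4
A, b = rng.standard_normal((m, n)), rng.standard_normal(m)

g      = lambda y: np.log(np.exp(y).sum())       # log-sum-exp, see (95)
grad_g = lambda y: np.exp(y) / np.exp(y).sum()   # its gradient, see (96)
h      = lambda x: g(A @ x + b)                  # composition with affine map

x0, eps = rng.standard_normal(n), 1e-6
fd = np.array([(h(x0 + eps*e) - h(x0 - eps*e)) / (2*eps) for e in np.eye(n)])
assert np.allclose(fd, A.T @ grad_g(A @ x0 + b), atol=1e-5)   # eq. (91)
```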

- Log-sum-exp-affine: $h : \mathbb{R}^n \to \mathbb{R}$,
\[ h(x) = \log \sum_{i=1}^m \exp(a_i^T x + b_i), \tag{93} \]
where $a_1, \ldots, a_m \in \mathbb{R}^n$ and $b_1, \ldots, b_m \in \mathbb{R}$, is composed of $f : \mathbb{R}^n \to \mathbb{R}^m$ and $g : \mathbb{R}^m \to \mathbb{R}$ with
\[ f(x) = A x + b, \tag{94} \]
where $A \in \mathbb{R}^{m \times n}$ has rows $a_1^T, \ldots, a_m^T$ and $b = [b_1, \ldots, b_m]^T$, and with
\[ g(y) = \log \sum_{i=1}^m \exp y_i, \tag{95} \]
\[ \nabla g(y) = \frac{1}{\sum_{i=1}^m \exp y_i} [\exp y_1, \ldots, \exp y_m]^T, \tag{96} \]
\[ \nabla^2 g(y) = \mathrm{diag}(\nabla g(y)) - \nabla g(y) \nabla g(y)^T. \tag{97} \]
Therefore,
\[ \nabla h(x) = A^T \nabla g(A x + b), \tag{98} \]
\[ \nabla^2 h(x) = A^T \nabla^2 g(A x + b) A, \tag{99} \]
which can be written as
\[ \nabla h(x) = \frac{A^T z}{\mathbf{1}^T z}, \tag{100} \]
\[ \nabla^2 h(x) = A^T \left( \frac{\mathrm{diag}(z)}{\mathbf{1}^T z} - \frac{z z^T}{(\mathbf{1}^T z)^2} \right) A, \tag{101} \]
where $z_i = \exp(a_i^T x + b_i)$, $i = 1, \ldots, m$. See more on the log-sum-exp function in [1].

10 Basic analysis

1. $x \in C \subseteq \mathbb{R}^n$ is an interior point of $C$ if there exists an $\epsilon > 0$ for which $\{y \mid \|y - x\| \le \epsilon\} \subseteq C$, i.e., there exists a ball centered at $x$ that lies entirely in $C$. The interior of $C$, $\mathrm{int}\,C$, is the set of all interior points of $C$. The complement of a set $C$ is defined as $C^c = \mathbb{R}^n \setminus C = \{x \in \mathbb{R}^n \mid x \notin C\}$.
2. A set $C$ is open if for any $x \in C$ there exists $\epsilon > 0$ for which $\{y \mid \|y - x\| \le \epsilon\} \subseteq C$, i.e., $\mathrm{int}\,C = C$. A set $C$ is closed if its complement is open. Any union of open sets is open; the intersection of finitely many open sets is open. Any intersection of closed sets is closed; the union of finitely many closed sets is closed. An alternative definition of a closed set is that it contains the limits of all convergent sequences of points in $C$.
3. A set $C$ is bounded if there exists $M > 0$ such that $\|x\| \le M$ for all $x \in C$. A set $C$ is compact if it is closed and bounded. Every continuous function on a compact set attains its extreme values on that set.
4. The closure of a set $C$ is $\mathrm{cl}\,C = \mathbb{R}^n \setminus \mathrm{int}(\mathbb{R}^n \setminus C)$, i.e., the complement of the interior of the complement of $C$. If $C$ is closed, $\mathrm{cl}\,C = C$.
5. The boundary of a set $C$ is $\mathrm{bd}\,C = \mathrm{cl}\,C \setminus \mathrm{int}\,C$. $C$ is closed if it contains its boundary, $\mathrm{bd}\,C \subseteq C$. $C$ is open if it contains no boundary points, $C \cap \mathrm{bd}\,C = \emptyset$.
6. The relative interior of a set $C$, $\mathrm{relint}\,C$, is its interior relative to $\mathrm{aff}\,C$,
\[ \mathrm{relint}\,C = \{x \in C \mid B(x, r) \cap \mathrm{aff}\,C \subseteq C \text{ for some } r > 0\}, \tag{102} \]
where $B(x, r) = \{y \mid \|y - x\| \le r\}$. The relative boundary of a set $C$ is $\mathrm{cl}\,C \setminus \mathrm{relint}\,C$.

11 Acknowledgements

This note was revised by Yongjin Lee and Reza Eghbali from the version of Jan 8, 2013 by De Dennis Meng. We would like to acknowledge the work of previous TAs, Brian Hutchinson, Karthik Mohan, and De Dennis Meng.

References

[1] S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, 2004.
[2] R. Horn and C. Johnson, Matrix Analysis, Cambridge University Press.
[3] Stanford EE263 Linear Dynamical Systems course materials.
[4] J. Gallier, The Schur Complement and Symmetric Positive Semidefinite and Definite Matrices.
[5] J. Burke, review notes for UW Math 408.
[6] H. L. Royden and P. M. Fitzpatrick, Real Analysis, Prentice Hall.
[7] J. Wilde, I. Tecu, and T. Suzuki, Linear Algebra II: Quadratic Forms and Definiteness.
