Quadratic Forms


Reading: [SB].

1 Quadratic Forms

A quadratic function $f : \mathbb{R} \to \mathbb{R}$ has the form $f(x) = a x^2$. The generalization of this notion to two variables is the quadratic form
$$Q(x_1, x_2) = a_{11} x_1^2 + a_{12} x_1 x_2 + a_{21} x_2 x_1 + a_{22} x_2^2.$$
Here each term has degree 2 (the sum of exponents is 2 for all summands). A quadratic form of three variables looks as
$$Q(x_1, x_2, x_3) = a_{11} x_1^2 + a_{12} x_1 x_2 + a_{13} x_1 x_3 + a_{21} x_2 x_1 + a_{22} x_2^2 + a_{23} x_2 x_3 + a_{31} x_3 x_1 + a_{32} x_3 x_2 + a_{33} x_3^2.$$
A general quadratic form of $n$ variables is a real-valued function $Q : \mathbb{R}^n \to \mathbb{R}$ of the form
$$Q(x_1, x_2, \dots, x_n) = a_{11} x_1^2 + a_{12} x_1 x_2 + \dots + a_{1n} x_1 x_n + a_{21} x_2 x_1 + a_{22} x_2^2 + \dots + a_{2n} x_2 x_n + \dots + a_{n1} x_n x_1 + a_{n2} x_n x_2 + \dots + a_{nn} x_n^2.$$
In short,
$$Q(x_1, x_2, \dots, x_n) = \sum_{i,j=1}^{n} a_{ij} x_i x_j.$$
As we see, a quadratic form is determined by the matrix
$$A = \begin{pmatrix} a_{11} & \dots & a_{1n} \\ \vdots & & \vdots \\ a_{n1} & \dots & a_{nn} \end{pmatrix}.$$

1.1 Matrix Representation of Quadratic Forms

Let $Q(x_1, x_2, \dots, x_n) = \sum_{i,j=1}^n a_{ij} x_i x_j$ be a quadratic form with matrix $A$. It is easy to see that
$$Q(x_1, \dots, x_n) = (x_1, \dots, x_n) \begin{pmatrix} a_{11} & \dots & a_{1n} \\ \vdots & & \vdots \\ a_{n1} & \dots & a_{nn} \end{pmatrix} \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}.$$
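The double-sum and matrix-product formulas are easy to check numerically. Here is a minimal sketch in Python (the helper name `quad_form` is ours, not from [SB]):

```python
def quad_form(A, x):
    """Evaluate Q(x) = sum_{i,j} a_ij * x_i * x_j, i.e. x^T A x."""
    n = len(x)
    return sum(A[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

# Q(x1, x2) = 5*x1^2 - 10*x1*x2 + x2^2 via its symmetric matrix
A = [[5, -5],
     [-5, 1]]
print(quad_form(A, [1, 2]))  # 5*1 - 5*2 - 5*2 + 1*4 = -11
```

The same function works for any number of variables, since it just runs the double sum over the matrix entries.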

Equivalently, $Q(x) = x^T A x$.

Example. The quadratic form $Q(x_1, x_2) = 5x_1^2 - 10x_1x_2 + x_2^2$, whose symmetric matrix is $A = \begin{pmatrix} 5 & -5 \\ -5 & 1 \end{pmatrix}$, is the product of three matrices:
$$(x_1, x_2) \begin{pmatrix} 5 & -5 \\ -5 & 1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}.$$

Symmetrization of a matrix. The quadratic form $Q(x_1, x_2) = 5x_1^2 - 10x_1x_2 + x_2^2$ can be represented, for example, by the following matrices:
$$\begin{pmatrix} 5 & -10 \\ 0 & 1 \end{pmatrix}, \quad \begin{pmatrix} 5 & 0 \\ -10 & 1 \end{pmatrix}, \quad \begin{pmatrix} 5 & -5 \\ -5 & 1 \end{pmatrix},$$
the last one being symmetric: $a_{ij} = a_{ji}$.

Theorem 1. Any quadratic form can be represented by a symmetric matrix.

Indeed, if $a_{ij} \neq a_{ji}$ we replace both by $a_{ij}' = a_{ji}' = \frac{a_{ij} + a_{ji}}{2}$; this does not change the corresponding quadratic form. Generally, one can find the symmetrization $A'$ of a matrix $A$ by $A' = \frac{A + A^T}{2}$.

1.2 Definiteness of Quadratic Forms

A quadratic form of one variable is just a quadratic function $Q(x) = a x^2$. If $a > 0$ then $Q(x) > 0$ for each nonzero $x$; if $a < 0$ then $Q(x) < 0$ for each nonzero $x$. So the sign of the coefficient $a$ determines the sign of a one variable quadratic form. The notion of definiteness described below generalizes this phenomenon to multivariable quadratic forms.

1.2.1 Generic Examples

The quadratic form $Q(x, y) = x^2 + y^2$ is positive for all nonzero arguments $(x, y)$ (that is, $(x, y) \neq (0, 0)$). Such forms are called positive definite.

The quadratic form $Q(x, y) = -x^2 - y^2$ is negative for all nonzero arguments $(x, y)$. Such forms are called negative definite.

The quadratic form $Q(x, y) = (x - y)^2$ is nonnegative. This means that $Q(x, y) = (x - y)^2$ is either positive or zero for nonzero arguments. Such forms are called positive semidefinite.
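The symmetrization $A' = (A + A^T)/2$ can be checked numerically: both matrices give the same values of the form at every point. A sketch (`symmetrize` and `quad_form` are our own helper names):

```python
def symmetrize(A):
    """Return (A + A^T) / 2, which represents the same quadratic form as A."""
    n = len(A)
    return [[(A[i][j] + A[j][i]) / 2 for j in range(n)] for i in range(n)]

def quad_form(A, x):
    """Q(x) = x^T A x as a double sum."""
    n = len(x)
    return sum(A[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

A = [[5, -10],
     [0, 1]]      # one non-symmetric matrix of 5*x1^2 - 10*x1*x2 + x2^2
S = symmetrize(A)  # [[5.0, -5.0], [-5.0, 1.0]]
for x in [(1, 0), (0, 1), (3, 2), (-1, 4)]:
    assert quad_form(A, x) == quad_form(S, x)  # same form, both matrices
```

The cross terms $a_{ij} x_i x_j + a_{ji} x_j x_i$ only depend on the sum $a_{ij} + a_{ji}$, which is exactly why averaging the off-diagonal entries changes nothing.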

The quadratic form $Q(x, y) = -(x - y)^2$ is nonpositive. This means that $Q(x, y) = -(x - y)^2$ is either negative or zero for nonzero arguments. Such forms are called negative semidefinite.

The quadratic form $Q(x, y) = x^2 - y^2$ is called indefinite since it can take both positive and negative values, for example $Q(3, 1) = 9 - 1 = 8 > 0$ and $Q(1, 3) = 1 - 9 = -8 < 0$.

1.2.2 Definiteness

Definition. A quadratic form $Q(x) = x^T A x$ (equivalently, a symmetric matrix $A$) is
(a) positive definite if $Q(x) > 0$ for all nonzero $x \in \mathbb{R}^n$;
(b) positive semidefinite if $Q(x) \geq 0$ for all nonzero $x \in \mathbb{R}^n$;
(c) negative definite if $Q(x) < 0$ for all nonzero $x \in \mathbb{R}^n$;
(d) negative semidefinite if $Q(x) \leq 0$ for all nonzero $x \in \mathbb{R}^n$;
(e) indefinite if $Q(x) > 0$ for some $x$ and $Q(x) < 0$ for some other $x$.

1.2.3 Definiteness and Optimality

Determining the definiteness of a quadratic form $Q$ is equivalent to determining whether $x = 0$ is a max, a min, or neither. In particular:

If $Q$ is positive definite, then $x = 0$ is the global minimum;
If $Q$ is negative definite, then $x = 0$ is the global maximum.

1.2.4 Definiteness of a 2 Variable Quadratic Form

Let
$$Q(x_1, x_2) = a x_1^2 + 2b x_1 x_2 + c x_2^2 = (x_1, x_2) \begin{pmatrix} a & b \\ b & c \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}$$
be a 2 variable quadratic form. Here $A = \begin{pmatrix} a & b \\ b & c \end{pmatrix}$ is the symmetric matrix of the quadratic form. The determinant $\begin{vmatrix} a & b \\ b & c \end{vmatrix} = ac - b^2$ is called the discriminant of $Q$. It is easy to see that
$$a x_1^2 + 2b x_1 x_2 + c x_2^2 = a \left( x_1 + \frac{b}{a} x_2 \right)^2 + \frac{ac - b^2}{a} x_2^2.$$
Let us use the notation $D_1 = a$, $D_2 = ac - b^2$. Actually $D_1$ and $D_2$ are the leading principal minors of $A$. Note that there exists one more principal (non leading) minor (of degree 1), $\bar{D}_1 = c$. Then
$$Q(x_1, x_2) = D_1 \left( x_1 + \frac{b}{a} x_2 \right)^2 + \frac{D_2}{D_1} x_2^2.$$
From this expression we obtain:

1. If $D_1 > 0$ and $D_2 > 0$ then the form is of $x^2 + y^2$ type, so it is positive definite;

2. If $D_1 < 0$ and $D_2 > 0$ then the form is of $-x^2 - y^2$ type, so it is negative definite;

3. If $D_1 > 0$ and $D_2 < 0$ then the form is of $x^2 - y^2$ type, so it is indefinite; if $D_1 < 0$ and $D_2 < 0$ then the form is of $-x^2 + y^2$ type, so it is also indefinite. Thus, if $D_2 < 0$ then the form is indefinite.

Semidefiniteness depends not only on the leading principal minors $D_1, D_2$ but on all principal minors, in this case on $\bar{D}_1 = c$ too.

4. If $D_1 \geq 0$, $\bar{D}_1 \geq 0$ and $D_2 \geq 0$ then the form is positive semidefinite. Note that $D_1 \geq 0$ and $D_2 \geq 0$ alone are not enough; the additional condition $\bar{D}_1 \geq 0$ here is absolutely necessary: consider the form $Q(x_1, x_2) = -x_2^2$ with $a = 0$, $b = 0$, $c = -1$. Here $D_1 = a \geq 0$ and $D_2 = ac - b^2 \geq 0$, nevertheless the form is not positive semidefinite.

5. If $D_1 \leq 0$, $\bar{D}_1 \leq 0$ and $D_2 \geq 0$ then the form is negative semidefinite. Again, $D_1 \leq 0$ and $D_2 \geq 0$ alone are not enough; the additional condition $\bar{D}_1 \leq 0$ again is absolutely necessary: consider the form $Q(x_1, x_2) = x_2^2$ with $a = 0$, $b = 0$, $c = 1$. Here $D_1 = a \leq 0$ and $D_2 = ac - b^2 \geq 0$, nevertheless the form is not negative semidefinite.

1.2.5 Definiteness of a 3 Variable Quadratic Form

Let us start with the following

Example. $Q(x_1, x_2, x_3) = x_1^2 + 2x_2^2 - 7x_3^2 - 4x_1x_2 + 8x_1x_3$. The symmetric matrix of this quadratic form is
$$\begin{pmatrix} 1 & -2 & 4 \\ -2 & 2 & 0 \\ 4 & 0 & -7 \end{pmatrix}.$$
The leading principal minors of this matrix are
$$D_1 = 1, \quad D_2 = \begin{vmatrix} 1 & -2 \\ -2 & 2 \end{vmatrix} = -2, \quad D_3 = \begin{vmatrix} 1 & -2 & 4 \\ -2 & 2 & 0 \\ 4 & 0 & -7 \end{vmatrix} = -18.$$

Now look:
$$Q(x_1, x_2, x_3) = x_1^2 + 2x_2^2 - 7x_3^2 - 4x_1x_2 + 8x_1x_3$$
$$= x_1^2 - 4x_1(x_2 - 2x_3) + 2x_2^2 - 7x_3^2$$
$$= [x_1^2 - 4x_1(x_2 - 2x_3) + 4(x_2 - 2x_3)^2] - 4(x_2 - 2x_3)^2 + 2x_2^2 - 7x_3^2$$
$$= [x_1 - 2x_2 + 4x_3]^2 - 2x_2^2 + 16x_2x_3 - 23x_3^2$$
$$= [x_1 - 2x_2 + 4x_3]^2 - 2(x_2^2 - 8x_2x_3) - 23x_3^2$$
$$= [x_1 - 2x_2 + 4x_3]^2 - 2[(x_2 - 4x_3)^2 - 16x_3^2] - 23x_3^2$$
$$= [x_1 - 2x_2 + 4x_3]^2 - 2(x_2 - 4x_3)^2 + 32x_3^2 - 23x_3^2$$
$$= [x_1 - 2x_2 + 4x_3]^2 - 2(x_2 - 4x_3)^2 + 9x_3^2$$
$$= D_1 l_1^2 + \frac{D_2}{D_1} l_2^2 + \frac{D_3}{D_2} l_3^2,$$
where $l_1 = x_1 - 2x_2 + 4x_3$, $l_2 = x_2 - 4x_3$, $l_3 = x_3$. That is, $(l_1, l_2, l_3)$ are linear combinations of $(x_1, x_2, x_3)$. More precisely,
$$\begin{pmatrix} l_1 \\ l_2 \\ l_3 \end{pmatrix} = P \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}, \quad \text{where } P = \begin{pmatrix} 1 & -2 & 4 \\ 0 & 1 & -4 \\ 0 & 0 & 1 \end{pmatrix}$$
is a nonsingular matrix (a change of variables).

Now turn to the general 3 variable quadratic form
$$Q(x_1, x_2, x_3) = (x_1, x_2, x_3) \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}.$$
The following three determinants
$$D_1 = a_{11}, \quad D_2 = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix}, \quad D_3 = \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix}$$
are the leading principal minors. It is possible to show that, as in the 2 variable case, if $D_1 \neq 0$ and $D_2 \neq 0$, then
$$Q(x_1, x_2, x_3) = D_1 l_1^2 + \frac{D_2}{D_1} l_2^2 + \frac{D_3}{D_2} l_3^2$$

where $l_1, l_2, l_3$ are some linear combinations of $x_1, x_2, x_3$ (this is called Lagrange's reduction). This implies the following criteria:

1. The form is positive definite iff $D_1 > 0$, $D_2 > 0$, $D_3 > 0$, that is, all leading principal minors are positive.

2. The form is negative definite iff $D_1 < 0$, $D_2 > 0$, $D_3 < 0$, that is, the leading principal minors alternate in sign starting with a negative one.

Example. Determine the definiteness of the form $Q(x_1, x_2, x_3) = 3x_1^2 + 2x_2^2 + 3x_3^2 - 2x_1x_2 - 2x_2x_3$.

Solution. The matrix of our form is
$$\begin{pmatrix} 3 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 3 \end{pmatrix}.$$
The leading principal minors are
$$D_1 = 3 > 0, \quad D_2 = \begin{vmatrix} 3 & -1 \\ -1 & 2 \end{vmatrix} = 5 > 0, \quad D_3 = \begin{vmatrix} 3 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 3 \end{vmatrix} = 12 > 0,$$
thus the form is positive definite.

1.2.6 Definiteness of an n Variable Quadratic Form

Let
$$Q(x_1, \dots, x_n) = (x_1, \dots, x_n) \begin{pmatrix} a_{11} & \dots & a_{1n} \\ \vdots & & \vdots \\ a_{n1} & \dots & a_{nn} \end{pmatrix} \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}$$
be an $n$ variable quadratic form. The following $n$ determinants
$$D_1 = a_{11}, \quad D_2 = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix}, \quad \dots, \quad D_n = \begin{vmatrix} a_{11} & \dots & a_{1n} \\ \vdots & & \vdots \\ a_{n1} & \dots & a_{nn} \end{vmatrix}$$
are the leading principal minors. As in the previous cases, it is possible to show that if $D_1, \dots, D_{n-1}$ are nonzero, then
$$Q(x_1, \dots, x_n) = D_1 l_1^2 + \frac{D_2}{D_1} l_2^2 + \dots + \frac{D_n}{D_{n-1}} l_n^2,$$
where $(l_1, l_2, \dots, l_n)$ are linear combinations of $(x_1, x_2, \dots, x_n)$; more precisely,
$$\begin{pmatrix} l_1 \\ \vdots \\ l_n \end{pmatrix} = \begin{pmatrix} p_{11} & \dots & p_{1n} \\ \vdots & & \vdots \\ p_{n1} & \dots & p_{nn} \end{pmatrix} \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}$$

where
$$P = \begin{pmatrix} p_{11} & \dots & p_{1n} \\ \vdots & & \vdots \\ p_{n1} & \dots & p_{nn} \end{pmatrix}$$
is a nonsingular matrix (a change of variables).

Theorem 2.
1. A quadratic form is positive definite if and only if $D_1 > 0, D_2 > 0, \dots, D_n > 0$, that is, all leading principal minors are positive.
2. A quadratic form is negative definite if and only if $D_1 < 0, D_2 > 0, D_3 < 0, D_4 > 0, \dots$, that is, the leading principal minors alternate in sign starting with a negative one.
3. If some $k$-th order leading principal minor is nonzero but does not fit either of the above two sign patterns, then the form is indefinite.

The situation with semidefiniteness is more complicated: here not only the leading principal minors are involved, but all principal minors.

Theorem 3.
1. A quadratic form is positive semidefinite if and only if all principal minors are $\geq 0$.
2. A quadratic form is negative semidefinite if and only if all principal minors of odd degree are $\leq 0$, and all principal minors of even degree are $\geq 0$.

1.3 Definiteness and Eigenvalues

As we know, a symmetric $n \times n$ matrix has $n$ real eigenvalues (possibly with multiplicities).

Theorem 4. Given a quadratic form $Q(x) = x^T A x$, let $\lambda_1, \dots, \lambda_n$ be the eigenvalues of $A$. Then $Q(x)$ is
positive definite iff $\lambda_i > 0$, $i = 1, \dots, n$;
negative definite iff $\lambda_i < 0$, $i = 1, \dots, n$;
positive semidefinite iff $\lambda_i \geq 0$, $i = 1, \dots, n$;
negative semidefinite iff $\lambda_i \leq 0$, $i = 1, \dots, n$.

Proof (sketch of $x^T A x > 0 \Rightarrow \lambda_i > 0$). Let $v$ be a normalized eigenvector of $\lambda_i$, that is, $Av = \lambda_i v$ with $v^T v = 1$. Then $0 < v^T A v = \lambda_i v^T v = \lambda_i$. (Conversely, expanding $x$ in an orthonormal basis of eigenvectors gives $Q(x) = \sum_i \lambda_i c_i^2$, which is positive whenever all $\lambda_i > 0$.)
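For a $2 \times 2$ symmetric matrix the eigenvalues come from the characteristic polynomial $\lambda^2 - (\operatorname{tr} A)\lambda + \det A$, so the eigenvalue criterion can be tried out directly. A sketch (the function names are ours):

```python
import math

def eigenvalues_2x2(A):
    """Eigenvalues of a symmetric 2x2 matrix via the characteristic polynomial."""
    a, b, c = A[0][0], A[0][1], A[1][1]
    tr, det = a + c, a * c - b * b
    d = math.sqrt(tr * tr - 4 * det)  # discriminant is >= 0 for symmetric A
    return (tr - d) / 2, (tr + d) / 2

def classify(A):
    """Classify a symmetric 2x2 matrix by the signs of its eigenvalues."""
    lo, hi = eigenvalues_2x2(A)
    if lo > 0:
        return "positive definite"
    if hi < 0:
        return "negative definite"
    if lo >= 0:
        return "positive semidefinite"
    if hi <= 0:
        return "negative semidefinite"
    return "indefinite"

print(classify([[1, 0], [0, 1]]))    # x^2 + y^2        -> positive definite
print(classify([[1, -1], [-1, 1]]))  # (x - y)^2        -> positive semidefinite
print(classify([[1, 0], [0, -1]]))   # x^2 - y^2        -> indefinite
```

Each of the generic examples from section 1.2.1 lands in the expected class.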

1.4 Linear Constraints and Bordered Matrices

1.4.1 Two Variable Case

The quadratic form $Q(x_1, x_2) = x_1^2 - x_2^2$ is indefinite (why?). But if we restrict $Q$ to the subset (subspace) of $\mathbb{R}^2$ determined by the constraint $x_2 = 0$, we obtain a one variable quadratic form $q(x) = Q(x, 0) = x^2$, which is positive definite. Thus the restriction of $Q$ to the $x_1$ axis is positive definite.

Similarly, the constraint $x_1 = 0$ gives $q(x) = Q(0, x) = -x^2$, which is negative definite. Thus the restriction of $Q$ to the $x_2$ axis is negative definite.

Now let us consider the constraint $x_1 - 2x_2 = 0$. Solving for $x_1$ from this constraint, we obtain $x_1 = 2x_2$. Substituting into $Q$, we obtain the one variable quadratic form $q(x) = Q(2x, x) = 4x^2 - x^2 = 3x^2$, which is positive definite. Thus the restriction of $Q$ to the line $x_1 - 2x_2 = 0$ is positive definite.

Let us repeat the last calculation for a general 2 variable quadratic form
$$Q(x_1, x_2) = a x_1^2 + 2b x_1 x_2 + c x_2^2 = (x_1, x_2) \begin{pmatrix} a & b \\ b & c \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}$$
subject to the linear constraint $Ax_1 + Bx_2 = 0$. As above, let us solve for $x_1$ from the constraint and substitute into $Q$: $x_1 = -\frac{B}{A} x_2$, so
$$Q\left( -\frac{B}{A} x_2,\; x_2 \right) = a \left( -\frac{B}{A} x_2 \right)^2 + 2b \left( -\frac{B}{A} x_2 \right) x_2 + c x_2^2 = \frac{aB^2 - 2bAB + cA^2}{A^2}\, x_2^2.$$
So the definiteness of $Q(x_1, x_2)$ on the constraint set $Ax_1 + Bx_2 = 0$ depends on the sign of the coefficient $\frac{aB^2 - 2bAB + cA^2}{A^2}$, more precisely on the sign of the numerator $aB^2 - 2bAB + cA^2$, which, as is easy to see, is nothing else than
$$-\det \begin{pmatrix} 0 & A & B \\ A & a & b \\ B & b & c \end{pmatrix}.$$

This matrix is called the bordered matrix of $A$:
$$H = \begin{pmatrix} 0 & A & B \\ A & a & b \\ B & b & c \end{pmatrix}.$$
Thus we have proved

Theorem 5. A two variable quadratic form $Q(x_1, x_2) = a x_1^2 + 2b x_1 x_2 + c x_2^2$ restricted to the constraint set $Ax_1 + Bx_2 = 0$ is positive (resp. negative) definite if and only if the determinant of the bordered matrix
$$\det \begin{pmatrix} 0 & A & B \\ A & a & b \\ B & b & c \end{pmatrix}$$
is negative (resp. positive).

1.4.2 n Variable m Constraint Case

An analogous result holds in the general case. Let
$$x = \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}, \quad A = \begin{pmatrix} a_{11} & \dots & a_{1n} \\ \vdots & & \vdots \\ a_{n1} & \dots & a_{nn} \end{pmatrix},$$
and let $Q(x_1, \dots, x_n) = x^T A x$ be an $n$ variable quadratic form. Consider the constraint set
$$\begin{pmatrix} B_{11} & \dots & B_{1n} \\ \vdots & & \vdots \\ B_{m1} & \dots & B_{mn} \end{pmatrix} \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix} = 0.$$
In fact this set is the null space of $B$, is it not? The bordered matrix for this situation looks as
$$H = \begin{pmatrix} 0 & B \\ B^T & A \end{pmatrix} = \begin{pmatrix} 0 & \dots & 0 & B_{11} & \dots & B_{1n} \\ \vdots & & \vdots & \vdots & & \vdots \\ 0 & \dots & 0 & B_{m1} & \dots & B_{mn} \\ B_{11} & \dots & B_{m1} & a_{11} & \dots & a_{1n} \\ \vdots & & \vdots & \vdots & & \vdots \\ B_{1n} & \dots & B_{mn} & a_{n1} & \dots & a_{nn} \end{pmatrix}.$$
This $(m+n) \times (m+n)$ matrix has $m+n$ leading principal minors
$$M_1, M_2, \dots, M_m, M_{m+1}, \dots, M_{2m-1}, M_{2m}, M_{2m+1}, \dots, M_{m+n} = \det H.$$

The first $m$ minors $M_1, \dots, M_m$ are determinants of zero matrices. The next $m-1$ minors $M_{m+1}, \dots, M_{2m-1}$ also have zero determinant. The determinant of the next minor, $M_{2m}$, equals $(-1)^m (\det \bar{B})^2$, where $\bar{B}$ is the left $m \times m$ submatrix of $B$; it carries no information about $A$. Only the determinants of the last $n - m$ minors $M_{2m+1}, \dots, M_{m+n}$ carry information about the matrix $A$, i.e., about the quadratic form $Q$. Exactly these minors are essential for constrained definiteness.

Theorem 6.
(i) If $\det H = M_{m+n}$ has the sign $(-1)^n$ and the signs of the determinants of the last $n - m$ leading principal minors $M_{2m+1}, \dots, M_{m+n}$ alternate, then $Q$ is negative definite on the constraint set $Bx = 0$, so $x = 0$ is a strict global max of $Q$ on the constraint set $Bx = 0$.
(ii) If the determinants of all of the last $n - m$ leading principal minors $M_{2m+1}, \dots, M_{m+n}$ have the same sign $(-1)^m$, then $Q$ is positive definite on the constraint set $Bx = 0$, so $x = 0$ is a strict global min of $Q$ on the constraint set $Bx = 0$.
(iii) If both conditions (i) and (ii) are violated by some nonzero minors among the last $n - m$ leading principal minors $M_{2m+1}, \dots, M_{m+n}$, then $Q$ is indefinite on the constraint set $Bx = 0$, so $x = 0$ is neither a max nor a min of $Q$ on the constraint set $Bx = 0$.

This table describes the above sign patterns:

              $M_{2m+1}$      $M_{2m+2}$     $\dots$   $M_{m+n-1}$    $M_{m+n}$
    negative  $(-1)^{m+1}$    $(-1)^{m+2}$   $\dots$   $(-1)^{n-1}$   $(-1)^n$
    positive  $(-1)^m$        $(-1)^m$       $\dots$   $(-1)^m$       $(-1)^m$

Example. Determine the definiteness of the following constrained quadratic form:
$$Q(x_1, x_2, x_3) = x_1^2 + x_2^2 - x_3^2 + 4x_1x_3 - 2x_1x_2, \quad \text{subject to } x_1 + x_2 + x_3 = 0.$$

Solution. Here $n = 3$, $m = 1$. The bordered matrix is
$$H = \begin{pmatrix} 0 & 1 & 1 & 1 \\ 1 & 1 & -1 & 2 \\ 1 & -1 & 1 & 0 \\ 1 & 2 & 0 & -1 \end{pmatrix}.$$
The leading principal minors are
$$M_1 = 0, \quad M_2 = \begin{vmatrix} 0 & 1 \\ 1 & 1 \end{vmatrix} = -1, \quad M_3 = \begin{vmatrix} 0 & 1 & 1 \\ 1 & 1 & -1 \\ 1 & -1 & 1 \end{vmatrix} = -4 < 0, \quad M_4 = \det H = 16 > 0.$$
Since $n = 3$ and $m = 1$, the essential minors are $M_3$ and $M_4$. For negative definiteness the sign pattern must be
$$(-1)^{n-1} = (-1)^2 = +, \quad (-1)^n = (-1)^3 = -,$$
and for positive definiteness the sign pattern must be
$$(-1)^m = (-1)^1 = -, \quad (-1)^m = (-1)^1 = -.$$
Since we have $M_3 = -4 < 0$ and $M_4 = 16 > 0$, which differs from both patterns, our constrained quadratic form is indefinite.

1.5 Change of variables*

Let $Q = x^T A x$ be a quadratic form in the variable $x \in \mathbb{R}^n$. Let us pass to a new variable $y \in \mathbb{R}^n$, connected to $x$ by $x = Py$, where $P$ is some nonsingular matrix. Note that in this case $x^T = (Py)^T = y^T P^T$. Then
$$Q = x^T A x = y^T P^T A P y = y^T (P^T A P) y,$$
so the matrix of the new quadratic form in the variable $y$ is $B = P^T A P$. If $A$ is symmetric, $B = P^T A P$ is symmetric too (prove it using the definition of a symmetric matrix, $A = A^T$).

Jacobi's Theorem states that any symmetric matrix $A$ can be transformed to a diagonal matrix $\Lambda = P^T A P$ by an orthogonal matrix $P$. The elements of $\Lambda$ are uniquely determined up to permutation. If we allow $P$ to be an arbitrary nonsingular matrix, then $A$ can be transformed to a diagonal matrix where each diagonal element is 1, -1 or 0.
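The congruence transformation $B = P^T A P$ can be illustrated on the earlier form $5x_1^2 - 10x_1x_2 + x_2^2$. The particular $P$ below is our own choice (it completes the square), and it even diagonalizes the form:

```python
def matmul(X, Y):
    """Product of two matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(row) for row in zip(*X)]

A = [[5, -5],
     [-5, 1]]   # symmetric matrix of 5*x1^2 - 10*x1*x2 + x2^2
P = [[1, 1],
     [0, 1]]    # a nonsingular change of variables x = P y
B = matmul(matmul(transpose(P), A), P)

assert B == transpose(B)  # P^T A P stays symmetric
print(B)                  # [[5, 0], [0, -4]]: in the new variables Q = 5*y1^2 - 4*y2^2
```

The diagonal entries have one positive and one negative sign, so $n_+ = n_- = 1$: the form is indefinite, in agreement with its discriminant $D_2 = 5 \cdot 1 - 25 = -20 < 0$.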

Important remark: let $D_1, D_2, \dots, D_n$ be the leading principal minors of $A$, and let $\Lambda$ be the corresponding diagonal matrix with 1, 0, -1 on the diagonal; then the leading principal minors of $\Lambda$ have the same sign pattern as the $D_i$.

Sylvester's Law of Inertia states that the number of 0s, $n_0$, the number of 1s, $n_+$, and the number of -1s, $n_-$, are invariant in the sense that any other diagonalization gives the same $n_0$, $n_+$ and $n_-$. The signature of a quadratic form is defined as $n_+ - n_-$, so the signature is an invariant of the quadratic form too.

A quadratic form $Q$ is positive definite if $n_+ = n$; negative definite if $n_- = n$; positive semidefinite if $n_- = 0$; negative semidefinite if $n_+ = 0$; indefinite if $n_+ > 0$ and $n_- > 0$.

2 Why Quadratic Forms?*

Recall the Taylor formula
$$f(x_0 + h) = f(x_0) + f'(x_0) h + \frac{f''(x_0)}{2} h^2 + \frac{f'''(x_0)}{3!} h^3 + \dots$$
and ask a question: is this function increasing at $x = x_0$? That is, for $h > 0$, is
$$f(x_0 + h) - f(x_0) = f'(x_0) h + \frac{f''(x_0)}{2} h^2 + \frac{f'''(x_0)}{3!} h^3 + \dots$$
positive? Notice that for small enough $h$ the quadratic term $\frac{f''(x_0)}{2} h^2$, the cubic term $\frac{f'''(x_0)}{3!} h^3$, etc., are smaller than the linear term $f'(x_0) h$, so the positivity of $f(x_0 + h) - f(x_0)$ depends on the positivity of the coefficient $f'(x_0)$ of that linear form $f'(x_0) h$, that is, on the positivity of the derivative at $x_0$. From this it easily follows that a local extremum can be expected only at a critical point, $f'(x_0) = 0$.

Now ask a question: how to recognize whether a critical point $x_0$ is a min or a max? Turn again to the Taylor formula, which now, since $f'(x_0) = 0$, looks as
$$f(x_0 + h) = f(x_0) + \frac{f''(x_0)}{2} h^2 + \frac{f'''(x_0)}{3!} h^3 + \dots$$
Consider the difference
$$f(x_0 + h) - f(x_0) = \frac{f''(x_0)}{2} h^2 + \frac{f'''(x_0)}{3!} h^3 + \dots$$

The point $x_0$ is a min if this difference $f(x_0 + h) - f(x_0)$ is positive for all small enough values of $h$. For small enough $h$ the cubic term $\frac{f'''(x_0)}{3!} h^3$, the term of degree 4, $\frac{f^{(4)}(x_0)}{4!} h^4$, etc., are smaller than the quadratic term $\frac{f''(x_0)}{2} h^2$, so the positivity of the difference $f(x_0 + h) - f(x_0)$ depends on the positivity of the coefficient $f''(x_0)$ of that quadratic form $\frac{f''(x_0)}{2} h^2$, that is, on the positivity of the second derivative at $x_0$. So at $h = 0$ this form has a minimum (it is positive definite) if its coefficient $f''(x_0)$ is positive (our good old second order condition).

Now consider a function of two variables, $F(x_1, x_2)$. Again there exists a Taylor formula for two variables:
$$F(x_1 + h_1, x_2 + h_2) = F(x_1, x_2) + F_{x_1}(x_1, x_2) h_1 + F_{x_2}(x_1, x_2) h_2 + \frac{1}{2} F_{x_1 x_1}(x_1, x_2) h_1^2 + F_{x_1 x_2}(x_1, x_2) h_1 h_2 + \frac{1}{2} F_{x_2 x_2}(x_1, x_2) h_2^2 + \dots$$
As in the one-variable case, an optimum (min or max) can be expected at a critical point, where the linear form $F_{x_1}(x_1, x_2) h_1 + F_{x_2}(x_1, x_2) h_2$ vanishes. And the minimality or maximality depends on the quadratic form
$$Q(h_1, h_2) = \frac{1}{2} F_{x_1 x_1}(x_1, x_2) h_1^2 + F_{x_1 x_2}(x_1, x_2) h_1 h_2 + \frac{1}{2} F_{x_2 x_2}(x_1, x_2) h_2^2.$$
Namely, if this form is positive definite, that is, if $Q(h_1, h_2) > 0$ for all $(h_1, h_2) \neq (0, 0)$, then $(x_1, x_2)$ is a point of minimum, and if the form is negative definite, that is, $Q(h_1, h_2) < 0$ for all $(h_1, h_2) \neq (0, 0)$, then $(x_1, x_2)$ is a point of maximum.
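The second order conditions above amount to checking the definiteness of the Hessian quadratic form. A sketch for the 2-variable case, using the leading-principal-minor test for two variables described earlier (the function name and the example function are ours):

```python
def det2(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def classify_critical_point(H):
    """Second-order test at a critical point of a 2-variable function,
    where H is the (symmetric) Hessian matrix there."""
    D1, D2 = H[0][0], det2(H)   # leading principal minors
    if D1 > 0 and D2 > 0:
        return "local minimum"   # Hessian form positive definite
    if D1 < 0 and D2 > 0:
        return "local maximum"   # Hessian form negative definite
    if D2 < 0:
        return "saddle point"    # Hessian form indefinite
    return "test inconclusive"   # semidefinite case

# F(x, y) = x^2 + 2*x*y + 3*y^2 has a critical point at (0, 0)
# with Hessian [[2, 2], [2, 6]] there
print(classify_critical_point([[2, 2], [2, 6]]))  # local minimum
```

The inconclusive branch corresponds to the semidefinite case, where higher-order terms of the Taylor expansion decide.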

Exercises

1. Write the following quadratic forms in matrix form:
(a) $Q(x_1, x_2) = x_1^2 - x_1 x_2 + x_2^2$;
(b) $Q(x_1, x_2) = 5x_1^2 - 10x_1x_2 + x_2^2$;
(c) $Q(x_1, x_2, x_3) = x_1^2 + 2x_2^2 + 3x_3^2 + 4x_1x_2 - 6x_1x_3 + 8x_2x_3$.

2. By direct matrix multiplication, express each of the following matrix products as a quadratic form:
(a) $(x_1, x_2) \begin{pmatrix} \cdot & 4 \\ \cdot & \cdot \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}$, (b) $(u, v) \begin{pmatrix} 5 & \cdot \\ \cdot & \cdot \end{pmatrix} \begin{pmatrix} u \\ v \end{pmatrix}$.

3. Write the quadratic form corresponding to the matrix $A$, and then find a symmetric matrix which determines the same quadratic form, for
(a) $A = \begin{pmatrix} \cdot & \cdot \\ \cdot & \cdot \end{pmatrix}$, (b) $A = \begin{pmatrix} \cdot & \cdot & \cdot \\ \cdot & \cdot & \cdot \\ \cdot & \cdot & \cdot \end{pmatrix}$, (c) $A = \begin{pmatrix} \cdot & \cdot & \cdot \\ \cdot & \cdot & \cdot \\ \cdot & \cdot & \cdot \end{pmatrix}$.

4. Write an example of a 2 variable quadratic form $Q(x_1, x_2)$ which is (a) positive definite, (b) negative definite, (c) positive semidefinite, (d) negative semidefinite, (e) indefinite.

5. Write an example of a 3 variable quadratic form $Q(x_1, x_2, x_3)$ which is (a) positive definite, (b) negative definite, (c) positive semidefinite, (d) negative semidefinite, (e) indefinite.

6. Determine the definiteness of the following symmetric matrices (quadratic forms): (a)-(e) the given $2 \times 2$ matrices, (f)-(g) the given $3 \times 3$ matrices.

7. Formulate criteria for positive and negative definiteness of the quadratic form given by a diagonal matrix

$$A = \begin{pmatrix} a_1 & & \\ & \ddots & \\ & & a_n \end{pmatrix}.$$

8. Determine the definiteness of the following constrained quadratic forms:
(a) $Q(x_1, x_2) = x_1^2 + x_1x_2 - x_2^2$, subject to $x_1 + x_2 = 0$.
(b) $Q(x_1, x_2) = 4x_1^2 + x_1x_2 - x_2^2$, subject to $x_1 + x_2 = 0$.
(c) $Q(x_1, x_2, x_3) = x_1^2 + x_2^2 - x_3^2 + 4x_1x_3 - 2x_1x_2$, subject to $x_1 + x_2 + x_3 = 0$.
(d) $Q(x_1, x_2, x_3) = x_1^2 + x_2^2 + x_3^2 + 4x_1x_3 - 2x_1x_2$, subject to $x_1 + x_2 + x_3 = 0$.
(e) $Q(x_1, x_2, x_3) = x_1^2 - x_3^2 + 4x_1x_2 - 6x_2x_3$, subject to $x_1 + x_2 - x_3 = 0$.
(f) $Q(x_1, x_2, x_3, x_4) = x_1^2 - x_2^2 + x_3^2 + x_4^2 + 4x_2x_3 - x_1x_4$, subject to $x_1 + x_2 - x_3 + x_4 = 0$ and $x_1 - 9x_2 + x_4 = 0$.

Homework: 5, 6(g), 7, 8(e), 8(f).

Essay (optional, please type)

1. Lagrange reduction. If all leading principal minors $D_k$ of the symmetric matrix of a quadratic form are nonzero, then
$$Q(x_1, \dots, x_n) = D_1 l_1^2 + \frac{D_2}{D_1} l_2^2 + \dots + \frac{D_n}{D_{n-1}} l_n^2,$$
where $l_1, \dots, l_n$ are some linear combinations of $x_1, \dots, x_n$. We have shown this for $n = 2$ in the general case, and for a particular example for $n = 3$ (see above). (a) Give a general proof for $n = 3$. (b) Deduce the criterion for definiteness from this reduction.

2. Linear restriction of a quadratic form. We have actually proved the criterion for definiteness of a restricted quadratic form in terms of the bordered matrix for $n = 2$ and $m = 1$ (see above). Do the same for $n = 3$ and $m = 1$.

3. Definiteness of a restricted form. Show that if a quadratic form $Q(x_1, x_2) = a_{11} x_1^2 + 2a_{12} x_1 x_2 + a_{22} x_2^2$ is positive (negative) definite, then its restriction to a linear constraint set $Ax_1 + Bx_2 = 0$ is also positive (negative) definite. Hint: calculate the determinant of the bordered matrix, or just substitute the value $x_1 = -\frac{B}{A} x_2$ into $Q$. But if a form is indefinite, then the restricted form can be whatever you wish (give examples).

Short Summary: Quadratic Forms

$$Q(x_1, x_2, \dots, x_n) = \sum_{i,j=1}^{n} a_{ij} x_i x_j = (x_1, \dots, x_n) \begin{pmatrix} a_{11} & \dots & a_{1n} \\ \vdots & & \vdots \\ a_{n1} & \dots & a_{nn} \end{pmatrix} \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix} = x^T A x.$$
$A$ is symmetric; if not, take its symmetrization $A' = \frac{A + A^T}{2}$.

Definiteness of $Q(x)$:
(a) positive definite if $Q(x) > 0$ for all nonzero $x \in \mathbb{R}^n$;
(b) positive semidefinite if $Q(x) \geq 0$ for all nonzero $x \in \mathbb{R}^n$;
(c) negative definite if $Q(x) < 0$ for all nonzero $x \in \mathbb{R}^n$;
(d) negative semidefinite if $Q(x) \leq 0$ for all nonzero $x \in \mathbb{R}^n$;
(e) indefinite if $Q(x) > 0$ for some $x$ and $Q(x) < 0$ for some other $x$.

Definiteness and optimality: if $Q$ is positive definite, then $x = 0$ is the global minimum; if $Q$ is negative definite, then $x = 0$ is the global maximum.

Leading principal minors:
$$D_1 = a_{11}, \quad D_2 = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix}, \quad \dots, \quad D_n = \det A.$$
A quadratic form $Q(x)$ is:
positive definite iff $D_1 > 0, D_2 > 0, \dots, D_n > 0$;
negative definite iff $D_1 < 0, D_2 > 0, D_3 < 0, D_4 > 0, \dots$;
indefinite if some nonzero $D_k$ violates both of the above sign patterns;
positive semidefinite iff all principal minors $M_k \geq 0$;
negative semidefinite iff all odd-order principal minors $M_{2k+1} \leq 0$ and all even-order principal minors $M_{2k} \geq 0$.

Definiteness of $Q(x) = x^T A x$ on the constraint set $Bx = 0$:

              $M_{2m+1}$      $M_{2m+2}$     $\dots$   $M_{m+n-1}$    $M_{m+n}$
    negative  $(-1)^{m+1}$    $(-1)^{m+2}$   $\dots$   $(-1)^{n-1}$   $(-1)^n$
    positive  $(-1)^m$        $(-1)^m$       $\dots$   $(-1)^m$       $(-1)^m$

where $M_{2m+1}, \dots, M_{m+n}$ are the last $n - m$ leading principal minors of the bordered matrix
$$H = \begin{pmatrix} 0 & B \\ B^T & A \end{pmatrix}.$$
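The summary's leading-principal-minor test can be automated for small matrices; cofactor expansion is fine at this size. A sketch, run on our reading of the worked 3-variable example $3x_1^2 + 2x_2^2 + 3x_3^2 - 2x_1x_2 - 2x_2x_3$:

```python
def det(A):
    """Determinant by cofactor expansion along the first row
    (exponential time, but fine for small matrices)."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(len(A)))

def leading_minors(A):
    """Leading principal minors D_1, ..., D_n of a square matrix."""
    return [det([row[:k] for row in A[:k]]) for k in range(1, len(A) + 1)]

A = [[3, -1, 0],
     [-1, 2, -1],
     [0, -1, 3]]
D = leading_minors(A)
print(D, all(d > 0 for d in D))  # [3, 5, 12] True -> positive definite
```

All minors are positive, so by the criterion the form is positive definite; swapping signs or making $D_2 < 0$ would land in the negative definite or indefinite branches of the test.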


More information

Transpose & Dot Product

Transpose & Dot Product Transpose & Dot Product Def: The transpose of an m n matrix A is the n m matrix A T whose columns are the rows of A. So: The columns of A T are the rows of A. The rows of A T are the columns of A. Example:

More information

A A x i x j i j (i, j) (j, i) Let. Compute the value of for and

A A x i x j i j (i, j) (j, i) Let. Compute the value of for and 7.2 - Quadratic Forms quadratic form on is a function defined on whose value at a vector in can be computed by an expression of the form, where is an symmetric matrix. The matrix R n Q R n x R n Q(x) =

More information

September Math Course: First Order Derivative

September Math Course: First Order Derivative September Math Course: First Order Derivative Arina Nikandrova Functions Function y = f (x), where x is either be a scalar or a vector of several variables (x,..., x n ), can be thought of as a rule which

More information

Assignment #10: Diagonalization of Symmetric Matrices, Quadratic Forms, Optimization, Singular Value Decomposition. Name:

Assignment #10: Diagonalization of Symmetric Matrices, Quadratic Forms, Optimization, Singular Value Decomposition. Name: Assignment #10: Diagonalization of Symmetric Matrices, Quadratic Forms, Optimization, Singular Value Decomposition Due date: Friday, May 4, 2018 (1:35pm) Name: Section Number Assignment #10: Diagonalization

More information

B553 Lecture 5: Matrix Algebra Review

B553 Lecture 5: Matrix Algebra Review B553 Lecture 5: Matrix Algebra Review Kris Hauser January 19, 2012 We have seen in prior lectures how vectors represent points in R n and gradients of functions. Matrices represent linear transformations

More information

Transpose & Dot Product

Transpose & Dot Product Transpose & Dot Product Def: The transpose of an m n matrix A is the n m matrix A T whose columns are the rows of A. So: The columns of A T are the rows of A. The rows of A T are the columns of A. Example:

More information

Semidefinite Programming Basics and Applications

Semidefinite Programming Basics and Applications Semidefinite Programming Basics and Applications Ray Pörn, principal lecturer Åbo Akademi University Novia University of Applied Sciences Content What is semidefinite programming (SDP)? How to represent

More information

DIAGONALIZATION. In order to see the implications of this definition, let us consider the following example Example 1. Consider the matrix

DIAGONALIZATION. In order to see the implications of this definition, let us consider the following example Example 1. Consider the matrix DIAGONALIZATION Definition We say that a matrix A of size n n is diagonalizable if there is a basis of R n consisting of eigenvectors of A ie if there are n linearly independent vectors v v n such that

More information

1. General Vector Spaces

1. General Vector Spaces 1.1. Vector space axioms. 1. General Vector Spaces Definition 1.1. Let V be a nonempty set of objects on which the operations of addition and scalar multiplication are defined. By addition we mean a rule

More information

University of Colorado at Denver Mathematics Department Applied Linear Algebra Preliminary Exam With Solutions 16 January 2009, 10:00 am 2:00 pm

University of Colorado at Denver Mathematics Department Applied Linear Algebra Preliminary Exam With Solutions 16 January 2009, 10:00 am 2:00 pm University of Colorado at Denver Mathematics Department Applied Linear Algebra Preliminary Exam With Solutions 16 January 2009, 10:00 am 2:00 pm Name: The proctor will let you read the following conditions

More information

Problem 1. CS205 Homework #2 Solutions. Solution

Problem 1. CS205 Homework #2 Solutions. Solution CS205 Homework #2 s Problem 1 [Heath 3.29, page 152] Let v be a nonzero n-vector. The hyperplane normal to v is the (n-1)-dimensional subspace of all vectors z such that v T z = 0. A reflector is a linear

More information

This operation is - associative A + (B + C) = (A + B) + C; - commutative A + B = B + A; - has a neutral element O + A = A, here O is the null matrix

This operation is - associative A + (B + C) = (A + B) + C; - commutative A + B = B + A; - has a neutral element O + A = A, here O is the null matrix 1 Matrix Algebra Reading [SB] 81-85, pp 153-180 11 Matrix Operations 1 Addition a 11 a 12 a 1n a 21 a 22 a 2n a m1 a m2 a mn + b 11 b 12 b 1n b 21 b 22 b 2n b m1 b m2 b mn a 11 + b 11 a 12 + b 12 a 1n

More information

Monotone Function. Function f is called monotonically increasing, if. x 1 x 2 f (x 1 ) f (x 2 ) x 1 < x 2 f (x 1 ) < f (x 2 ) x 1 x 2

Monotone Function. Function f is called monotonically increasing, if. x 1 x 2 f (x 1 ) f (x 2 ) x 1 < x 2 f (x 1 ) < f (x 2 ) x 1 x 2 Monotone Function Function f is called monotonically increasing, if Chapter 3 x x 2 f (x ) f (x 2 ) It is called strictly monotonically increasing, if f (x 2) f (x ) Convex and Concave x < x 2 f (x )

More information

Real Symmetric Matrices and Semidefinite Programming

Real Symmetric Matrices and Semidefinite Programming Real Symmetric Matrices and Semidefinite Programming Tatsiana Maskalevich Abstract Symmetric real matrices attain an important property stating that all their eigenvalues are real. This gives rise to many

More information

MATH 425-Spring 2010 HOMEWORK ASSIGNMENTS

MATH 425-Spring 2010 HOMEWORK ASSIGNMENTS MATH 425-Spring 2010 HOMEWORK ASSIGNMENTS Instructor: Shmuel Friedland Department of Mathematics, Statistics and Computer Science email: friedlan@uic.edu Last update April 18, 2010 1 HOMEWORK ASSIGNMENT

More information

18.S34 linear algebra problems (2007)

18.S34 linear algebra problems (2007) 18.S34 linear algebra problems (2007) Useful ideas for evaluating determinants 1. Row reduction, expanding by minors, or combinations thereof; sometimes these are useful in combination with an induction

More information

There are six more problems on the next two pages

There are six more problems on the next two pages Math 435 bg & bu: Topics in linear algebra Summer 25 Final exam Wed., 8/3/5. Justify all your work to receive full credit. Name:. Let A 3 2 5 Find a permutation matrix P, a lower triangular matrix L with

More information

Lecture 7: Positive Semidefinite Matrices

Lecture 7: Positive Semidefinite Matrices Lecture 7: Positive Semidefinite Matrices Rajat Mittal IIT Kanpur The main aim of this lecture note is to prepare your background for semidefinite programming. We have already seen some linear algebra.

More information

Chapter 7: Symmetric Matrices and Quadratic Forms

Chapter 7: Symmetric Matrices and Quadratic Forms Chapter 7: Symmetric Matrices and Quadratic Forms (Last Updated: December, 06) These notes are derived primarily from Linear Algebra and its applications by David Lay (4ed). A few theorems have been moved

More information

STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2017 LECTURE 5

STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2017 LECTURE 5 STAT 39: MATHEMATICAL COMPUTATIONS I FALL 17 LECTURE 5 1 existence of svd Theorem 1 (Existence of SVD) Every matrix has a singular value decomposition (condensed version) Proof Let A C m n and for simplicity

More information

Linear Algebra Solutions 1

Linear Algebra Solutions 1 Math Camp 1 Do the following: Linear Algebra Solutions 1 1. Let A = and B = 3 8 5 A B = 3 5 9 A + B = 9 11 14 4 AB = 69 3 16 BA = 1 4 ( 1 3. Let v = and u = 5 uv = 13 u v = 13 v u = 13 Math Camp 1 ( 7

More information

MATH2070 Optimisation

MATH2070 Optimisation MATH2070 Optimisation Nonlinear optimisation with constraints Semester 2, 2012 Lecturer: I.W. Guo Lecture slides courtesy of J.R. Wishart Review The full nonlinear optimisation problem with equality constraints

More information

MATH 20F: LINEAR ALGEBRA LECTURE B00 (T. KEMP)

MATH 20F: LINEAR ALGEBRA LECTURE B00 (T. KEMP) MATH 20F: LINEAR ALGEBRA LECTURE B00 (T KEMP) Definition 01 If T (x) = Ax is a linear transformation from R n to R m then Nul (T ) = {x R n : T (x) = 0} = Nul (A) Ran (T ) = {Ax R m : x R n } = {b R m

More information

Ma/CS 6b Class 20: Spectral Graph Theory

Ma/CS 6b Class 20: Spectral Graph Theory Ma/CS 6b Class 20: Spectral Graph Theory By Adam Sheffer Recall: Parity of a Permutation S n the set of permutations of 1,2,, n. A permutation σ S n is even if it can be written as a composition of an

More information

Mathematical Economics: Lecture 16

Mathematical Economics: Lecture 16 Mathematical Economics: Lecture 16 Yu Ren WISE, Xiamen University November 26, 2012 Outline 1 Chapter 21: Concave and Quasiconcave Functions New Section Chapter 21: Concave and Quasiconcave Functions Concave

More information

The minimal polynomial

The minimal polynomial The minimal polynomial Michael H Mertens October 22, 2015 Introduction In these short notes we explain some of the important features of the minimal polynomial of a square matrix A and recall some basic

More information

Chapter 6. Eigenvalues. Josef Leydold Mathematical Methods WS 2018/19 6 Eigenvalues 1 / 45

Chapter 6. Eigenvalues. Josef Leydold Mathematical Methods WS 2018/19 6 Eigenvalues 1 / 45 Chapter 6 Eigenvalues Josef Leydold Mathematical Methods WS 2018/19 6 Eigenvalues 1 / 45 Closed Leontief Model In a closed Leontief input-output-model consumption and production coincide, i.e. V x = x

More information

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2 MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS SYSTEMS OF EQUATIONS AND MATRICES Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a

More information

Linear Algebra Practice Problems

Linear Algebra Practice Problems Linear Algebra Practice Problems Math 24 Calculus III Summer 25, Session II. Determine whether the given set is a vector space. If not, give at least one axiom that is not satisfied. Unless otherwise stated,

More information

MATH 315 Linear Algebra Homework #1 Assigned: August 20, 2018

MATH 315 Linear Algebra Homework #1 Assigned: August 20, 2018 Homework #1 Assigned: August 20, 2018 Review the following subjects involving systems of equations and matrices from Calculus II. Linear systems of equations Converting systems to matrix form Pivot entry

More information

Ma/CS 6b Class 20: Spectral Graph Theory

Ma/CS 6b Class 20: Spectral Graph Theory Ma/CS 6b Class 20: Spectral Graph Theory By Adam Sheffer Eigenvalues and Eigenvectors A an n n matrix of real numbers. The eigenvalues of A are the numbers λ such that Ax = λx for some nonzero vector x

More information

Review of Vectors and Matrices

Review of Vectors and Matrices A P P E N D I X D Review of Vectors and Matrices D. VECTORS D.. Definition of a Vector Let p, p, Á, p n be any n real numbers and P an ordered set of these real numbers that is, P = p, p, Á, p n Then P

More information

YORK UNIVERSITY. Faculty of Science Department of Mathematics and Statistics MATH M Test #2 Solutions

YORK UNIVERSITY. Faculty of Science Department of Mathematics and Statistics MATH M Test #2 Solutions YORK UNIVERSITY Faculty of Science Department of Mathematics and Statistics MATH 3. M Test # Solutions. (8 pts) For each statement indicate whether it is always TRUE or sometimes FALSE. Note: For this

More information

1 Overview. 2 A Characterization of Convex Functions. 2.1 First-order Taylor approximation. AM 221: Advanced Optimization Spring 2016

1 Overview. 2 A Characterization of Convex Functions. 2.1 First-order Taylor approximation. AM 221: Advanced Optimization Spring 2016 AM 221: Advanced Optimization Spring 2016 Prof. Yaron Singer Lecture 8 February 22nd 1 Overview In the previous lecture we saw characterizations of optimality in linear optimization, and we reviewed the

More information

ISM206 Lecture Optimization of Nonlinear Objective with Linear Constraints

ISM206 Lecture Optimization of Nonlinear Objective with Linear Constraints ISM206 Lecture Optimization of Nonlinear Objective with Linear Constraints Instructor: Prof. Kevin Ross Scribe: Nitish John October 18, 2011 1 The Basic Goal The main idea is to transform a given constrained

More information

1 Convexity, concavity and quasi-concavity. (SB )

1 Convexity, concavity and quasi-concavity. (SB ) UNIVERSITY OF MARYLAND ECON 600 Summer 2010 Lecture Two: Unconstrained Optimization 1 Convexity, concavity and quasi-concavity. (SB 21.1-21.3.) For any two points, x, y R n, we can trace out the line of

More information

Math 164-1: Optimization Instructor: Alpár R. Mészáros

Math 164-1: Optimization Instructor: Alpár R. Mészáros Math 164-1: Optimization Instructor: Alpár R. Mészáros First Midterm, April 20, 2016 Name (use a pen): Student ID (use a pen): Signature (use a pen): Rules: Duration of the exam: 50 minutes. By writing

More information

CHAPTER 4: HIGHER ORDER DERIVATIVES. Likewise, we may define the higher order derivatives. f(x, y, z) = xy 2 + e zx. y = 2xy.

CHAPTER 4: HIGHER ORDER DERIVATIVES. Likewise, we may define the higher order derivatives. f(x, y, z) = xy 2 + e zx. y = 2xy. April 15, 2009 CHAPTER 4: HIGHER ORDER DERIVATIVES In this chapter D denotes an open subset of R n. 1. Introduction Definition 1.1. Given a function f : D R we define the second partial derivatives as

More information

Optimization using Calculus. Optimization of Functions of Multiple Variables subject to Equality Constraints

Optimization using Calculus. Optimization of Functions of Multiple Variables subject to Equality Constraints Optimization using Calculus Optimization of Functions of Multiple Variables subject to Equality Constraints 1 Objectives Optimization of functions of multiple variables subjected to equality constraints

More information

Fundamentals of Unconstrained Optimization

Fundamentals of Unconstrained Optimization dalmau@cimat.mx Centro de Investigación en Matemáticas CIMAT A.C. Mexico Enero 2016 Outline Introduction 1 Introduction 2 3 4 Optimization Problem min f (x) x Ω where f (x) is a real-valued function The

More information

3 (Maths) Linear Algebra

3 (Maths) Linear Algebra 3 (Maths) Linear Algebra References: Simon and Blume, chapters 6 to 11, 16 and 23; Pemberton and Rau, chapters 11 to 13 and 25; Sundaram, sections 1.3 and 1.5. The methods and concepts of linear algebra

More information

Computationally, diagonal matrices are the easiest to work with. With this idea in mind, we introduce similarity:

Computationally, diagonal matrices are the easiest to work with. With this idea in mind, we introduce similarity: Diagonalization We have seen that diagonal and triangular matrices are much easier to work with than are most matrices For example, determinants and eigenvalues are easy to compute, and multiplication

More information

Chapter 7. Extremal Problems. 7.1 Extrema and Local Extrema

Chapter 7. Extremal Problems. 7.1 Extrema and Local Extrema Chapter 7 Extremal Problems No matter in theoretical context or in applications many problems can be formulated as problems of finding the maximum or minimum of a function. Whenever this is the case, advanced

More information

Math 164 (Lec 1): Optimization Instructor: Alpár R. Mészáros

Math 164 (Lec 1): Optimization Instructor: Alpár R. Mészáros Math 164 (Lec 1): Optimization Instructor: Alpár R. Mészáros Midterm, October 6, 016 Name (use a pen): Student ID (use a pen): Signature (use a pen): Rules: Duration of the exam: 50 minutes. By writing

More information

Paul Schrimpf. October 17, UBC Economics 526. Constrained optimization. Paul Schrimpf. First order conditions. Second order conditions

Paul Schrimpf. October 17, UBC Economics 526. Constrained optimization. Paul Schrimpf. First order conditions. Second order conditions UBC Economics 526 October 17, 2012 .1.2.3.4 Section 1 . max f (x) s.t. h(x) = c f : R n R, h : R n R m Draw picture of n = 2 and m = 1 At optimum, constraint tangent to level curve of function Rewrite

More information

Lecture 6 Positive Definite Matrices

Lecture 6 Positive Definite Matrices Linear Algebra Lecture 6 Positive Definite Matrices Prof. Chun-Hung Liu Dept. of Electrical and Computer Engineering National Chiao Tung University Spring 2017 2017/6/8 Lecture 6: Positive Definite Matrices

More information

Solution Set 7, Fall '12

Solution Set 7, Fall '12 Solution Set 7, 18.06 Fall '12 1. Do Problem 26 from 5.1. (It might take a while but when you see it, it's easy) Solution. Let n 3, and let A be an n n matrix whose i, j entry is i + j. To show that det

More information

Lecture Notes in Linear Algebra

Lecture Notes in Linear Algebra Lecture Notes in Linear Algebra Dr. Abdullah Al-Azemi Mathematics Department Kuwait University February 4, 2017 Contents 1 Linear Equations and Matrices 1 1.2 Matrices............................................

More information

The Singular Value Decomposition (SVD) and Principal Component Analysis (PCA)

The Singular Value Decomposition (SVD) and Principal Component Analysis (PCA) Chapter 5 The Singular Value Decomposition (SVD) and Principal Component Analysis (PCA) 5.1 Basics of SVD 5.1.1 Review of Key Concepts We review some key definitions and results about matrices that will

More information

Chapter 2: Linear Independence and Bases

Chapter 2: Linear Independence and Bases MATH20300: Linear Algebra 2 (2016 Chapter 2: Linear Independence and Bases 1 Linear Combinations and Spans Example 11 Consider the vector v (1, 1 R 2 What is the smallest subspace of (the real vector space

More information

Linear Algebra: Matrix Eigenvalue Problems

Linear Algebra: Matrix Eigenvalue Problems CHAPTER8 Linear Algebra: Matrix Eigenvalue Problems Chapter 8 p1 A matrix eigenvalue problem considers the vector equation (1) Ax = λx. 8.0 Linear Algebra: Matrix Eigenvalue Problems Here A is a given

More information

Homework For each of the following matrices, find the minimal polynomial and determine whether the matrix is diagonalizable.

Homework For each of the following matrices, find the minimal polynomial and determine whether the matrix is diagonalizable. Math 5327 Fall 2018 Homework 7 1. For each of the following matrices, find the minimal polynomial and determine whether the matrix is diagonalizable. 3 1 0 (a) A = 1 2 0 1 1 0 x 3 1 0 Solution: 1 x 2 0

More information

(Received: )

(Received: ) The Mathematics Student, Special Centenary Volume (2007), 123-130 SYLVESTER S MINORANT CRITERION, LAGRANGE-BELTRAMI IDENTITY, AND NONNEGATIVE DEFINITENESS SUDHIR R. GHORPADE AND BALMOHAN V. LIMAYE (Received:

More information

EC /11. Math for Microeconomics September Course, Part II Problem Set 1 with Solutions. a11 a 12. x 2

EC /11. Math for Microeconomics September Course, Part II Problem Set 1 with Solutions. a11 a 12. x 2 LONDON SCHOOL OF ECONOMICS Professor Leonardo Felli Department of Economics S.478; x7525 EC400 2010/11 Math for Microeconomics September Course, Part II Problem Set 1 with Solutions 1. Show that the general

More information

ARE211, Fall 2005 CONTENTS. 5. Characteristics of Functions Surjective, Injective and Bijective functions. 5.2.

ARE211, Fall 2005 CONTENTS. 5. Characteristics of Functions Surjective, Injective and Bijective functions. 5.2. ARE211, Fall 2005 LECTURE #18: THU, NOV 3, 2005 PRINT DATE: NOVEMBER 22, 2005 (COMPSTAT2) CONTENTS 5. Characteristics of Functions. 1 5.1. Surjective, Injective and Bijective functions 1 5.2. Homotheticity

More information

Matrix Algebra, part 2

Matrix Algebra, part 2 Matrix Algebra, part 2 Ming-Ching Luoh 2005.9.12 1 / 38 Diagonalization and Spectral Decomposition of a Matrix Optimization 2 / 38 Diagonalization and Spectral Decomposition of a Matrix Also called Eigenvalues

More information

235 Final exam review questions

235 Final exam review questions 5 Final exam review questions Paul Hacking December 4, 0 () Let A be an n n matrix and T : R n R n, T (x) = Ax the linear transformation with matrix A. What does it mean to say that a vector v R n is an

More information

5.) For each of the given sets of vectors, determine whether or not the set spans R 3. Give reasons for your answers.

5.) For each of the given sets of vectors, determine whether or not the set spans R 3. Give reasons for your answers. Linear Algebra - Test File - Spring Test # For problems - consider the following system of equations. x + y - z = x + y + 4z = x + y + 6z =.) Solve the system without using your calculator..) Find the

More information

A = 3 1. We conclude that the algebraic multiplicity of the eigenvalues are both one, that is,

A = 3 1. We conclude that the algebraic multiplicity of the eigenvalues are both one, that is, 65 Diagonalizable Matrices It is useful to introduce few more concepts, that are common in the literature Definition 65 The characteristic polynomial of an n n matrix A is the function p(λ) det(a λi) Example

More information

March 5, 2012 MATH 408 FINAL EXAM SAMPLE

March 5, 2012 MATH 408 FINAL EXAM SAMPLE March 5, 202 MATH 408 FINAL EXAM SAMPLE Partial Solutions to Sample Questions (in progress) See the sample questions for the midterm exam, but also consider the following questions. Obviously, a final

More information

EXERCISES ON DETERMINANTS, EIGENVALUES AND EIGENVECTORS. 1. Determinants

EXERCISES ON DETERMINANTS, EIGENVALUES AND EIGENVECTORS. 1. Determinants EXERCISES ON DETERMINANTS, EIGENVALUES AND EIGENVECTORS. Determinants Ex... Let A = 0 4 4 2 0 and B = 0 3 0. (a) Compute 0 0 0 0 A. (b) Compute det(2a 2 B), det(4a + B), det(2(a 3 B 2 )). 0 t Ex..2. For

More information

MATH 23a, FALL 2002 THEORETICAL LINEAR ALGEBRA AND MULTIVARIABLE CALCULUS Solutions to Final Exam (in-class portion) January 22, 2003

MATH 23a, FALL 2002 THEORETICAL LINEAR ALGEBRA AND MULTIVARIABLE CALCULUS Solutions to Final Exam (in-class portion) January 22, 2003 MATH 23a, FALL 2002 THEORETICAL LINEAR ALGEBRA AND MULTIVARIABLE CALCULUS Solutions to Final Exam (in-class portion) January 22, 2003 1. True or False (28 points, 2 each) T or F If V is a vector space

More information