Max-Min Problems in Rⁿ and the Hessian Matrix

Prerequisite: Section 6.3, Orthogonal Diagonalization

In this section, we study the problem of finding local maxima and minima for real-valued functions on Rⁿ. The method we describe is the higher-dimensional analogue of finding critical points and applying the second derivative test to functions on R studied in first-semester calculus.

Taylor's Theorem in Rⁿ

Let f ∈ C²(Rⁿ), where C²(Rⁿ) is the set of real-valued functions defined on Rⁿ having continuous second partial derivatives. The method for solving for local extreme points of f relies upon Taylor's Theorem with second degree remainder terms, which we state here without proof. (In the following theorem, an open hypersphere centered at x₀ is a set of the form {x ∈ Rⁿ | ‖x − x₀‖ < r} for some positive real number r.)

THEOREM 1 (Taylor's Theorem in Rⁿ)  Let A be an open hypersphere centered at x₀ ∈ Rⁿ, let u be a unit vector in Rⁿ, and let t ∈ R such that x₀ + tu ∈ A. Suppose f: A → R has continuous second partial derivatives throughout A; that is, f ∈ C²(A). Then there is a c with 0 ≤ c ≤ t such that

    f(x₀ + tu) = f(x₀) + Σᵢ (∂f/∂xᵢ)(tuᵢ) + (1/2) Σᵢ (∂²f/∂xᵢ²)(t²uᵢ²) + Σᵢ Σⱼ₌ᵢ₊₁ (∂²f/∂xᵢ∂xⱼ)(t²uᵢuⱼ),

where the sums run over 1 ≤ i ≤ n (and i < j ≤ n in the double sum), the first partial derivatives are evaluated at x₀, and the second partial derivatives are evaluated at x₀ + cu.

Taylor's Theorem in Rⁿ is derived from the familiar Taylor's Theorem in R by applying it to the function g(t) = f(x₀ + tu). In R², the formula in Taylor's Theorem is

    f(x₀ + tu) = f(x₀) + (∂f/∂x₁)(tu₁) + (∂f/∂x₂)(tu₂) + (1/2)(∂²f/∂x₁²)(t²u₁²) + (1/2)(∂²f/∂x₂²)(t²u₂²) + (∂²f/∂x₁∂x₂)(t²u₁u₂),

with the first partials evaluated at x₀ and the second partials evaluated at x₀ + cu.

Recall that the gradient of f is defined by ∇f = [∂f/∂x₁, ..., ∂f/∂xₙ]. If we let v = tu, then, in R², v = [v₁, v₂] = [tu₁, tu₂], and so (∂f/∂x₁)(tu₁) + (∂f/∂x₂)(tu₂) simplifies to ∇f · v.
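As a quick numerical illustration of this expansion (not part of the original text), the sketch below compares f(x₀ + tu) with the second-order approximation built from the gradient and Hessian at x₀, for a hypothetical smooth function f(x, y) = eˣ sin y; it assumes NumPy is available. The gap shrinks rapidly as t decreases, which is what the remainder term predicts.

    import numpy as np

    # Hypothetical example: f(x, y) = exp(x) * sin(y), with its gradient and
    # (symmetric) Hessian written out by hand.
    def f(p):
        x, y = p
        return np.exp(x) * np.sin(y)

    def grad_f(p):
        x, y = p
        return np.array([np.exp(x) * np.sin(y), np.exp(x) * np.cos(y)])

    def hess_f(p):
        x, y = p
        return np.array([[np.exp(x) * np.sin(y),  np.exp(x) * np.cos(y)],
                         [np.exp(x) * np.cos(y), -np.exp(x) * np.sin(y)]])

    x0 = np.array([0.3, 0.7])
    u = np.array([0.6, 0.8])                  # a unit vector
    for t in [0.5, 0.1, 0.02]:
        v = t * u
        quad = f(x0) + grad_f(x0) @ v + 0.5 * v @ hess_f(x0) @ v
        print(t, abs(f(x0 + v) - quad))       # error decreases rapidly with t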

Also, since f has continuous second derivatives, we have ∂²f/∂x₁∂x₂ = ∂²f/∂x₂∂x₁. Therefore,

    (1/2)(∂²f/∂x₁²)(t²u₁²) + (1/2)(∂²f/∂x₂²)(t²u₂²) + (∂²f/∂x₁∂x₂)(t²u₁u₂)

        = (1/2) [v₁  v₂] [ ∂²f/∂x₁²     ∂²f/∂x₁∂x₂ ] [v₁]
                         [ ∂²f/∂x₂∂x₁   ∂²f/∂x₂²   ] [v₂]

        = (1/2) vᵀHv,

where v is considered to be a column vector. The 2 × 2 matrix in this expression is called the Hessian matrix H for f. Thus, in the R² case, with v = tu, the formula in Taylor's Theorem can be written as

    f(x₀ + v) = f(x₀) + ∇f · v + (1/2) vᵀHv,   with H evaluated at x₀ + kv,

for some k with 0 ≤ k ≤ 1 (where k = c/t). While we have derived this result in R², the same formula holds in Rⁿ, where the Hessian H is the n × n matrix whose (i, j) entry is ∂²f/∂xᵢ∂xⱼ.

Critical Points

If A is a subset of Rⁿ, then we say that f: A → R has a local maximum at a point x₀ ∈ A if and only if there is an open neighborhood U of x₀ such that f(x₀) ≥ f(x) for all x ∈ U. A local minimum for a function f is defined analogously.

THEOREM 2  Let A be an open hypersphere centered at x₀ ∈ Rⁿ, and let f: A → R have continuous first partial derivatives on A. If f has a local maximum or a local minimum at x₀, then ∇f(x₀) = 0.

Proof  If x₀ is a local maximum, then f(x₀ + heᵢ) − f(x₀) ≤ 0 for small h. Then

    lim (h→0⁺) [f(x₀ + heᵢ) − f(x₀)]/h ≤ 0.   Similarly,   lim (h→0⁻) [f(x₀ + heᵢ) − f(x₀)]/h ≥ 0.

Hence, for the limit to exist, we must have ∂f/∂xᵢ = 0. Since this is true for each i, ∇f = 0. A similar proof works for local minimums.  QED

Points x₀ at which ∇f(x₀) = 0 are called critical points.

Example 1  Let f: R² → R be given by f(x, y) = 7x² + 6xy + 2x + 7y² − 22y + 2. Then ∇f = [14x + 6y + 2, 6x + 14y − 22]. We find critical points for f by solving ∇f = 0. This is the linear system

    14x + 6y + 2 = 0
    6x + 14y − 22 = 0,

which has the unique solution x₀ = [−1, 2]. Hence, by Theorem 2, (−1, 2) is the only possible extreme point for f. (We will see later that (−1, 2) is a local minimum.)
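Since ∇f in Example 1 is linear, the critical point can also be found by handing the 2 × 2 system to any linear solver; the following minimal sketch (assuming NumPy; any solver would do) reproduces x₀ = [−1, 2].

    import numpy as np

    # Example 1: grad f = 0 is the linear system  14x + 6y = -2,  6x + 14y = 22.
    A = np.array([[14.0,  6.0],
                  [ 6.0, 14.0]])
    b = np.array([-2.0, 22.0])

    x0 = np.linalg.solve(A, b)
    print(x0)   # [-1.  2.] -- the unique critical point (-1, 2)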

Sufficient Conditions for Local Extreme Points

If x₀ is a critical point for a function f, how can we determine whether x₀ is a local maximum or a local minimum? For functions on R, we have the second derivative test from calculus, which says that if f″(x₀) < 0, then x₀ is a local maximum, but if f″(x₀) > 0, then x₀ is a local minimum. We now derive a similar test in Rⁿ.

Consider the following formula from Taylor's Theorem:

    f(x₀ + v) = f(x₀) + ∇f(x₀) · v + (1/2) vᵀHv,   with H evaluated at x₀ + kv.

At a critical point, ∇f(x₀) = 0, and so

    f(x₀ + v) = f(x₀) + (1/2) vᵀHv,   with H evaluated at x₀ + kv.

Hence, if vᵀHv (with H evaluated at x₀ + kv) is positive for all small nonzero vectors v, then f will have a local minimum at x₀. (Similarly, if vᵀHv is negative, f will have a local maximum.) But since we assume that f has continuous second partial derivatives, vᵀHv is continuous in v and k, and will be positive for small v if vᵀHv (with H evaluated at x₀) is positive for all nonzero v. Hence,

THEOREM 3  Given the conditions of Taylor's Theorem for a set A and a function f: A → R, f has a local minimum at a critical point x₀ if vᵀHv > 0 for all nonzero vectors v. Similarly, f has a local maximum at a critical point x₀ if vᵀHv < 0 for all nonzero vectors v.

Positive Definite Quadratic Forms

If v is a vector in Rⁿ, and A is an n × n matrix, the expression vᵀAv is known as a quadratic form. (For more details on the general theory of quadratic forms, see Section 8.11.) A quadratic form such that vᵀAv > 0 for all nonzero vectors v is said to be positive definite. Similarly, a quadratic form such that vᵀAv < 0 for all nonzero vectors v is said to be negative definite.

Now, in particular, the expression vᵀHv in Theorem 3 is a quadratic form. Theorem 3 then says that if vᵀHv is a positive definite quadratic form at a critical point x₀, then f has a local minimum at x₀. Theorem 3 also says that if vᵀHv is a negative definite quadratic form at a critical point x₀, then f has a local maximum at x₀. Therefore, we need a method to determine whether a quadratic form of this type is positive definite or negative definite.

Now, the Hessian matrix, which we will abbreviate as H, is symmetric because ∂²f/∂xᵢ∂xⱼ = ∂²f/∂xⱼ∂xᵢ (since f ∈ C²(A)). Hence, by Theorem 6.20, H can be orthogonally diagonalized. That is, there is an orthogonal matrix P such that PHPᵀ = D, a diagonal matrix, and so H = PᵀDP. Hence, vᵀHv = vᵀPᵀDPv = (Pv)ᵀD(Pv). Letting w = Pv, we get vᵀHv = wᵀDw. But P is nonsingular, so as v ranges over all of Rⁿ, so does w, and vice versa.
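The change of variables w = Pv can be carried out numerically with an eigendecomposition; the sketch below (not from the text, assuming NumPy and using an arbitrary sample symmetric matrix) simply verifies that vᵀHv and wᵀDw agree, which is the key step in the argument above.

    import numpy as np

    # A sample symmetric matrix (any symmetric H works here).
    H = np.array([[ 2.0, -1.0],
                  [-1.0,  3.0]])

    eigvals, Q = np.linalg.eigh(H)   # columns of Q are orthonormal eigenvectors
    D = np.diag(eigvals)             # H = Q D Q^T, so with P = Q^T:  P H P^T = D

    v = np.array([3.0, -1.0])        # any nonzero vector
    w = Q.T @ v                      # the substitution w = P v
    print(v @ H @ v)                 # 27.0
    print(w @ D @ w)                 # 27.0 -- same value, now a sum of d_ii * w_i^2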

Thus, vᵀHv > 0 for all nonzero v if and only if wᵀDw > 0 for all nonzero w. Now, D is diagonal, and so wᵀDw = d₁₁w₁² + d₂₂w₂² + ⋯ + dₙₙwₙ². But the dᵢᵢ's are the eigenvalues of H. Thus, it follows that wᵀDw > 0 for all nonzero w if and only if all of these eigenvalues are positive. (Set w = eᵢ for each i to prove the "only if" part of this statement.) Similarly, wᵀDw < 0 for all nonzero w if and only if all of these eigenvalues are negative. Hence,

THEOREM 4  A symmetric matrix A defines a positive definite quadratic form vᵀAv if and only if all of the eigenvalues of A are positive. A symmetric matrix A defines a negative definite quadratic form vᵀAv if and only if all of the eigenvalues of A are negative.

Hence, Theorem 3 can be restated as follows: Given the conditions of Taylor's Theorem for a set A and a function f: A → R,
(1) if all of the eigenvalues of H are positive at a critical point x₀, then f has a local minimum at x₀, and
(2) if all of the eigenvalues of H are negative at a critical point x₀, then f has a local maximum at x₀.

Example 2  Consider the function f(x, y) = 7x² + 6xy + 2x + 7y² − 22y + 2. In Example 1, we found that f has a critical point at x₀ = [−1, 2]. Now, the Hessian matrix for f at x₀ is

    H = [ 14   6 ]
        [  6  14 ].

But p_H(x) = x² − 28x + 160, which has roots x = 8 and x = 20. Thus, H has all eigenvalues positive, and hence vᵀHv is positive definite. Theorem 4 then tells us that x₀ = [−1, 2] is a local minimum for f.

Local Maxima and Minima in R²

It can be shown (see Exercise 3) that a symmetric 2 × 2 matrix A defines a positive definite quadratic form (vᵀAv > 0 for all nonzero v) if and only if a₁₁ > 0 and |A| > 0. Similarly, a symmetric 2 × 2 matrix defines a negative definite quadratic form (vᵀAv < 0 for all nonzero v) if and only if a₁₁ < 0 and |A| > 0.
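As a numerical cross-check (not part of the original text, and assuming NumPy), the Hessian of Example 2 passes both tests: its eigenvalues are positive, and the 2 × 2 criterion just stated gives the same verdict.

    import numpy as np

    # Hessian of f from Example 2 (constant, because f is quadratic).
    H = np.array([[14.0,  6.0],
                  [ 6.0, 14.0]])

    # Theorem 4: check the eigenvalues directly.
    print(np.linalg.eigvalsh(H))                  # [ 8. 20.] -- all positive

    # 2x2 criterion: (1,1) entry and determinant both positive.
    print(H[0, 0] > 0 and np.linalg.det(H) > 0)   # True -- positive definite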

Example 3  Suppose f(x, y) = 2x² − 2x²y² + 2y² + 24y − x⁴ − y⁴. First, we look for critical points by solving the system

    ∂f/∂x = 4x − 4xy² − 4x³ = 4x(1 − (y² + x²)) = 0
    ∂f/∂y = −4x²y + 4y + 24 − 4y³ = −4y(x² + y²) + 4y + 24 = 0.

Setting ∂f/∂x = 0 yields x = 0 or y² + x² = 1. If x = 0, then ∂f/∂y = 0 gives 4y + 24 − 4y³ = 0. The unique real solution to this equation is y = 2. Thus, [0, 2] is a critical point. If x ≠ 0, then y² + x² = 1, and so from ∂f/∂y = 0 we have 0 = −4y(1) + 4y + 24 = 24, a contradiction, so there is no critical point when x ≠ 0.

Next, we compute the Hessian matrix at the critical point [0, 2]:

    H = [ 4 − 4y² − 12x²    −8xy            ]
        [ −8xy              −4x² + 4 − 12y² ],

which at [0, 2] equals

    [ −12    0 ]
    [   0  −44 ].

Since the (1, 1) entry is negative and |H| > 0, H defines a negative definite quadratic form, and so f has a local maximum at [0, 2].

An Example in R³

Example 4  Consider the function g(x, y, z) = 5x² + 2xz + 4xy + 10x + 3z² − 6yz − 6z + 5y² + 12y + 1. We find the critical points by solving the system

    ∂g/∂x = 10x + 2z + 4y + 10 = 0
    ∂g/∂y = 4x − 6z + 10y + 12 = 0
    ∂g/∂z = 2x + 6z − 6y − 6 = 0.

Using row reduction to solve this linear system yields the unique critical point [−9, 12, 16]. The Hessian matrix (whose (i, j) entry is ∂²g/∂xᵢ∂xⱼ) at [−9, 12, 16] is

    H = [ 10    4    2 ]
        [  4   10   −6 ]
        [  2   −6    6 ].

A lengthy computation produces p_H(x) = x³ − 26x² + 164x − 8. The roots of p_H(x) are approximately 0.04916, 10.6011, and 15.3497. Since all of these eigenvalues for H are positive, [−9, 12, 16] is a local minimum for g.
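The "lengthy computation" in Example 4 is a natural candidate for software; a sketch (not from the text, assuming NumPy) that recovers the same three eigenvalues is:

    import numpy as np

    # Hessian of g from Example 4 (constant, because g is quadratic).
    H = np.array([[10.0,  4.0,  2.0],
                  [ 4.0, 10.0, -6.0],
                  [ 2.0, -6.0,  6.0]])

    eigs = np.linalg.eigvalsh(H)
    print(eigs)              # approximately [ 0.04916 10.6011  15.3497 ]
    print(np.all(eigs > 0))  # True, so [-9, 12, 16] is a local minimum for g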

Failure of the Hessian Matrix Test

In calculus, we discovered that the second derivative test fails when the second derivative is zero at a critical point. A similar situation is true in Rⁿ. If the Hessian matrix at a critical point has 0 as an eigenvalue, and all other eigenvalues have the same sign, then the function f could have a local maximum, a local minimum, or neither at this critical point. Of course, if the Hessian matrix at a critical point has two eigenvalues with opposite signs, the critical point is not a local extreme point (why?). Exercise 2 illustrates these concepts.

New Vocabulary

C²(Rⁿ) (functions from Rⁿ to R having continuous second partial derivatives)
critical point (of a function)
gradient (of a function on Rⁿ)
Hessian matrix
local maximum (of a function on Rⁿ)
local minimum (of a function on Rⁿ)
negative definite quadratic form
open hypersphere (in Rⁿ)
positive definite quadratic form
Taylor's Theorem (in Rⁿ)

Highlights

- The gradient of a function f: Rⁿ → R is defined by ∇f = [∂f/∂x₁, ∂f/∂x₂, ..., ∂f/∂xₙ].
- Let A be an open hypersphere about x₀, and let f be a function on A with continuous partial derivatives. If f has a local maximum or minimum at x₀, then ∇f(x₀) = 0.
- For a function f: Rⁿ → R, its corresponding Hessian matrix H is the n × n matrix whose (i, j) entry is ∂²f/∂xᵢ∂xⱼ. In particular, for a function f: R² → R, the Hessian matrix is

      H = [ ∂²f/∂x₁²      ∂²f/∂x₁∂x₂ ]
          [ ∂²f/∂x₂∂x₁    ∂²f/∂x₂²   ].

- Taylor's Theorem in Rⁿ: Let A be an open hypersphere centered at x₀ ∈ Rⁿ, let u be a unit vector in Rⁿ, and let t ∈ R such that x₀ + tu ∈ A. Suppose f: A → R has continuous second partial derivatives throughout A; that is, f ∈ C²(A). Then there is a c with 0 ≤ c ≤ t such that f(x₀ + tu) = f(x₀) + Σᵢ (∂f/∂xᵢ)(tuᵢ) + (1/2) Σᵢ (∂²f/∂xᵢ²)(t²uᵢ²) + Σᵢ Σⱼ₌ᵢ₊₁ (∂²f/∂xᵢ∂xⱼ)(t²uᵢuⱼ), with the first partials evaluated at x₀ and the second partials evaluated at x₀ + cu. In particular, we have f(x₀ + v) = f(x₀) + ∇f · v + (1/2) vᵀHv, with H evaluated at x₀ + kv, for some k with 0 ≤ k ≤ 1.
- Let A be an open hypersphere centered at x₀ ∈ Rⁿ. If f: A → R has continuous second partial derivatives throughout A, then f has a local minimum at a critical point x₀ if vᵀHv > 0 for all nonzero vectors v. Similarly, f has a local maximum at a critical point x₀ if vᵀHv < 0 for all nonzero vectors v.
- A quadratic form is an expression of the form vᵀAv, where v is a vector in Rⁿ and A is an n × n matrix. A positive definite quadratic form is one such that vᵀAv > 0 for all nonzero vectors v. Similarly, a negative definite quadratic form is one such that vᵀAv < 0 for all nonzero vectors v.
- For a function f: Rⁿ → R having Hessian matrix H, if vᵀHv is a positive definite quadratic form at a critical point x₀, then f has a local minimum at x₀. Similarly, if vᵀHv is a negative definite quadratic form at a critical point x₀, then f has a local maximum at x₀.
- A symmetric matrix A defines a positive definite quadratic form vᵀAv if and only if all of the eigenvalues of A are positive. A symmetric matrix A defines a negative definite quadratic form vᵀAv if and only if all of the eigenvalues of A are negative.
- If f: Rⁿ → R has Hessian matrix H, and all eigenvalues of H are positive at a critical point x₀, then f has a local minimum at x₀. If f: Rⁿ → R has Hessian matrix H, and all eigenvalues of H are negative at a critical point x₀, then f has a local maximum at x₀.
- A symmetric 2 × 2 matrix A defines a positive definite quadratic form vᵀAv if and only if a₁₁ > 0 and |A| > 0. Similarly, a symmetric 2 × 2 matrix defines a negative definite quadratic form vᵀAv if and only if a₁₁ < 0 and |A| > 0.
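Putting the highlights together, here is a compact sketch of the whole test (not from the text; it assumes NumPy, and the helper name classify_critical_point is ours). It also reports the failure case described in the preceding subsection.

    import numpy as np

    def classify_critical_point(H, tol=1e-9):
        """Second derivative test in R^n, given the Hessian H at a critical point."""
        eigs = np.linalg.eigvalsh(H)
        if np.all(eigs > tol):
            return "local minimum"
        if np.all(eigs < -tol):
            return "local maximum"
        if np.any(eigs > tol) and np.any(eigs < -tol):
            return "not a local extreme point"
        return "test fails (an eigenvalue is zero)"

    # Example 2: local minimum.   Example 3: local maximum.
    print(classify_critical_point(np.array([[14.0, 6.0], [6.0, 14.0]])))
    print(classify_critical_point(np.array([[-12.0, 0.0], [0.0, -44.0]])))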

EXERCISES

1. In each part, solve for all critical points for the given function. Then, for each critical point, use the Hessian matrix to determine whether the critical point is a local maximum, a local minimum, or neither.
   ★a) f(x, y) = x³ + x² + 2xy − 3x + y²
   b) f(x, y) = 6x + 4xy + y + 8x  9y
   ★c) f(x, y) = x² + xy + 2x + y² − 2y + 5
   d) f(x, y) = x + x y  x + xy + xy  x + y  y  y
      (Hint: To solve for critical points, set ∂f/∂x = ∂f/∂y = 0.)
   ★e) f(x, y, z) = x + xy + xz + y⁴ + 4y z + 6y z  y + 4yz  4yz + z⁴  z

2. a) Show that f(x, y) = (x − 2)⁴ + (y − 2)² has a local minimum at [2, 2], but its Hessian matrix at [2, 2] has 0 as an eigenvalue.
   b) Show that f(x, y) = (x − 2)⁴ + (y − 2)³ has a critical point at [2, 2], its Hessian matrix at [2, 2] has all nonnegative eigenvalues, but [2, 2] is not a local extreme point for f.
   c) Show that f(x, y) = −(x + 1)⁴ − (y + 2)⁴ has a local maximum at [−1, −2], but its Hessian matrix at [−1, −2] is O₂₂ and thus has all of its eigenvalues equal to zero.
   d) Show that f(x, y, z) = (x − 1)² − (y − 2)² + (z − 2)⁴ does not have any local extreme points. Then verify that its Hessian matrix has eigenvalues of opposite sign at the function's only critical point.

3. a) Prove that a symmetric matrix

          A = [ a  b ]
              [ b  c ]

      defines a positive definite quadratic form if and only if a > 0 and |A| > 0. (Hint: Compute p_A(x) and show that both roots are positive if and only if a > 0 and |A| > 0.)
   b) Prove that a symmetric 2 × 2 matrix A defines a negative definite quadratic form if and only if a₁₁ < 0 and |A| > 0.

★4. True or False:
   a) If f: Rⁿ → R has continuous second partial derivatives, then the Hessian matrix H is symmetric.
   b) Every symmetric matrix A defines either a positive definite or a negative definite quadratic form.
   c) A Hessian matrix for a function with continuous second partial derivatives evaluated at any point is diagonalizable.
   d) vᵀ[    ]v is a positive definite quadratic form.
   e) vᵀ[    ]v is a positive definite quadratic form.

Answers to Selected Exercises

(1) a) Critical points: (1, −1), (−1, 1); local minimum at (1, −1)
    c) Critical point: (−2, 2); local minimum at (−2, 2)
    e) Critical points: (0, 0, 0); ( 1 , 1 , 1 ); ( 1 , 1 , 1 ); local minimums at ( 1 , 1 , 1 ) and ( 1 , 1 , 1 )

(4) a) T   b) F   c) T   d) T   e) F

Andrilli/Hecker, Elementary Linear Algebra, 4th ed. March 15, 2010. Copyright 2010, Elsevier Inc. All rights reserved.
