MA102: Multivariable Calculus

Rupam Barman and Shreemayee Bora, Department of Mathematics, IIT Guwahati

Differentiability of $f : U \subseteq \mathbb{R}^n \to \mathbb{R}^m$

Definition: Let $U \subseteq \mathbb{R}^n$ be open. Then $f : U \subseteq \mathbb{R}^n \to \mathbb{R}^m$ is differentiable at $X_0 \in U$ if there exists a linear map $L : \mathbb{R}^n \to \mathbb{R}^m$ such that
$$\lim_{H \to 0} \frac{\| f(X_0 + H) - f(X_0) - L(H) \|}{\| H \|} = 0.$$
The linear map $L$ is called the derivative of $f$ at $X_0$ and is denoted by $Df(X_0)$, that is, $L = Df(X_0)$. Other notations: $f'(X_0)$, $\frac{df}{dX}(X_0)$.
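For example, for $f(x,y) = x^2 + y^2$ at $X_0 = (a,b)$, the linear map $L(H) = 2a\,h_1 + 2b\,h_2$ (with $H = (h_1, h_2)$) works:
$$f(X_0 + H) - f(X_0) - L(H) = (a+h_1)^2 + (b+h_2)^2 - a^2 - b^2 - 2a h_1 - 2b h_2 = h_1^2 + h_2^2 = \|H\|^2,$$
so the quotient in the definition equals $\|H\|$, which tends to $0$ as $H \to 0$. Hence $f$ is differentiable at every $(a,b)$ with $Df(a,b)(H) = 2a h_1 + 2b h_2$.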

Characterization of differentiability

Theorem: Consider $f : \mathbb{R}^n \to \mathbb{R}^m$ with $f(X) = (f_1(X), \ldots, f_m(X))$, where $f_i : \mathbb{R}^n \to \mathbb{R}$. Then $f$ is differentiable at $X_0 \in \mathbb{R}^n$ $\iff$ $f_i$ is differentiable at $X_0$ for $i = 1, 2, \ldots, m$. Further,
$$Df(X_0)(H) = (\nabla f_1(X_0) \cdot H, \ldots, \nabla f_m(X_0) \cdot H).$$
The matrix of $Df(X_0)$ is called the Jacobian matrix of $f$ at $X_0$ and is denoted by $J_f(X_0)$.

Jacobian matrix of $f : \mathbb{R}^n \to \mathbb{R}^m$

$J_f(X_0)$ is the $m \times n$ matrix with $(i,j)$-th entry $a_{ij} := \partial_j f_i(X_0)$:
$$J_f(X_0) = \begin{bmatrix} \nabla f_1(X_0) \\ \vdots \\ \nabla f_m(X_0) \end{bmatrix}_{m \times n} = \left[ \frac{\partial f_i}{\partial x_j}(X_0) \right]_{m \times n}.$$
For $f(x,y) = (f_1(x,y), f_2(x,y), f_3(x,y))$,
$$J_f(a,b) = \begin{bmatrix} \partial_x f_1(a,b) & \partial_y f_1(a,b) \\ \partial_x f_2(a,b) & \partial_y f_2(a,b) \\ \partial_x f_3(a,b) & \partial_y f_3(a,b) \end{bmatrix}.$$
For $f(x,y,z) = (f_1(x,y,z), f_2(x,y,z))$,
$$J_f(a,b,c) = \begin{bmatrix} \partial_x f_1(a,b,c) & \partial_y f_1(a,b,c) & \partial_z f_1(a,b,c) \\ \partial_x f_2(a,b,c) & \partial_y f_2(a,b,c) & \partial_z f_2(a,b,c) \end{bmatrix}.$$

Examples

If $f(x,y) = (xy,\ e^x y,\ \sin y)$ then
$$J_f(x,y) = \begin{bmatrix} y & x \\ e^x y & e^x \\ 0 & \cos y \end{bmatrix}.$$
If $f(x,y,z) = (x+y+z,\ xyz)$ then
$$J_f(x,y,z) = \begin{bmatrix} 1 & 1 & 1 \\ yz & xz & xy \end{bmatrix}.$$

Chain rule

Theorem-A: Let $X : \mathbb{R} \to \mathbb{R}^n$ be differentiable at $t_0$ and $f : \mathbb{R}^n \to \mathbb{R}$ be differentiable at $X_0 := X(t_0)$. Then $f \circ X$ is differentiable at $t_0$ and
$$\frac{d}{dt} f(X(t))\Big|_{t=t_0} = \nabla f(X_0) \cdot X'(t_0) = \sum_{i=1}^{n} \partial_i f(X_0)\, \frac{dx_i(t_0)}{dt}.$$
Proof: Since $f$ is differentiable at $X_0$,
$$f(X_0 + H) = f(X_0) + \nabla f(X_0) \cdot H + E(H)\, \|H\| \qquad (*)$$
where $E(H) \to 0$ as $H \to 0$. Put $H := X(t) - X(t_0)$ in $(*)$ to complete the proof.

Remark: $\dfrac{d}{dt} f(X(t_0)) = \nabla f(X_0) \cdot X'(t_0) = \sum_{i=1}^{n} \partial_i f(X_0)\, \dfrac{dx_i(t_0)}{dt}$ is sometimes referred to as the total derivative.

Chain rule for partial derivatives

Theorem-B: If $X : \mathbb{R}^2 \to \mathbb{R}^n$, $(u,v) \mapsto (x_1(u,v), \ldots, x_n(u,v))$, has partial derivatives at $(a,b)$ and $f : \mathbb{R}^n \to \mathbb{R}$ is differentiable at $Y := X(a,b)$, then $F(u,v) := f(X(u,v))$ has partial derivatives at $(a,b)$ and
$$\partial_u F(a,b) = \nabla f(Y) \cdot \partial_u X(a,b) = \sum_{j=1}^{n} \frac{\partial f}{\partial x_j}(Y)\, \frac{\partial x_j}{\partial u}(a,b),$$
$$\partial_v F(a,b) = \nabla f(Y) \cdot \partial_v X(a,b) = \sum_{j=1}^{n} \frac{\partial f}{\partial x_j}(Y)\, \frac{\partial x_j}{\partial v}(a,b).$$

Chain rule for partial derivatives

Case n = 2: If $x = x(u,v)$ and $y = y(u,v)$ have first order partial derivatives at the point $(u,v)$, and if $z = f(x,y)$ is differentiable at the point $(x(u,v), y(u,v))$, then $z = f(x(u,v), y(u,v))$ has first order partial derivatives at $(u,v)$ given by
$$\frac{\partial z}{\partial u} = \frac{\partial z}{\partial x}\frac{\partial x}{\partial u} + \frac{\partial z}{\partial y}\frac{\partial y}{\partial u} \quad\text{and}\quad \frac{\partial z}{\partial v} = \frac{\partial z}{\partial x}\frac{\partial x}{\partial v} + \frac{\partial z}{\partial y}\frac{\partial y}{\partial v}.$$
Example: Find $\partial w/\partial u$ and $\partial w/\partial v$ when $w = x^2 + xy$ and $x = u^2 v$, $y = uv^2$.
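One way to work the example: by the formulas above,
$$\frac{\partial w}{\partial u} = (2x + y)\frac{\partial x}{\partial u} + x\,\frac{\partial y}{\partial u} = (2u^2 v + uv^2)(2uv) + (u^2 v)(v^2) = 4u^3 v^2 + 3u^2 v^3,$$
$$\frac{\partial w}{\partial v} = (2x + y)\frac{\partial x}{\partial v} + x\,\frac{\partial y}{\partial v} = (2u^2 v + uv^2)(u^2) + (u^2 v)(2uv) = 2u^4 v + 3u^3 v^2.$$
As a check, substituting directly gives $w = u^4 v^2 + u^3 v^3$, whose partial derivatives agree.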

Graph and level set

Let $f : \mathbb{R}^n \to \mathbb{R}$. Then $G(f) := \{(X, f(X)) : X \in \mathbb{R}^n\} \subseteq \mathbb{R}^{n+1}$ is the graph of $f$. $G(f)$ represents a hyper-surface in $\mathbb{R}^{n+1}$.

The set $S(f, \alpha) := \{X \in \mathbb{R}^n : f(X) = \alpha\}$ is called a level set of $f$ and represents a hyper-surface in $\mathbb{R}^n$.

[Figures: (e) graph of $f(x,y) := x^2 + y^2$; (f) level curves $x^2 + y^2 = k$.]
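For example, for $f(x,y) = x^2 + y^2$ the graph $G(f)$ is the paraboloid $z = x^2 + y^2$ in $\mathbb{R}^3$, while the level set $S(f,\alpha)$ is the circle $x^2 + y^2 = \alpha$ of radius $\sqrt{\alpha}$ when $\alpha > 0$, the single point $\{(0,0)\}$ when $\alpha = 0$, and empty when $\alpha < 0$.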

Level sets and gradients

Let $f : \mathbb{R}^n \to \mathbb{R}$ be differentiable at $X_0 \in \mathbb{R}^n$. Suppose that $X_0$ is a point on the hyper-surface $f(X) = \alpha$.

Let $X : (-\varepsilon, \varepsilon) \to \mathbb{R}^n$ be a curve on the hyper-surface $f(X) = \alpha$ passing through $X_0$, i.e., $X(0) = X_0$ and $f(X(t)) = \alpha$ for $t \in (-\varepsilon, \varepsilon)$.

Suppose that $X(t)$ is differentiable at $0$. Then
$$0 = \frac{d}{dt} f(X(t))\Big|_{t=0} = \nabla f(X_0) \cdot X'(0) \;\Longrightarrow\; \nabla f(X_0) \perp X'(0).$$
Since the line $X_0 + t\, X'(0)$ is tangent to the curve $X(t)$ at $X_0$, $\nabla f(X_0)$ is normal to the hyper-surface $f(X) = \alpha$ at $X_0$.
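For instance, take $f(x,y) = x^2 + y^2$, $\alpha = 2$ and $X_0 = (1,1)$. The curve $X(t) = (\sqrt{2}\cos(\pi/4 + t),\ \sqrt{2}\sin(\pi/4 + t))$ lies on the level curve and satisfies $X(0) = (1,1)$ and $X'(0) = (-1, 1)$; indeed $\nabla f(X_0) \cdot X'(0) = (2,2) \cdot (-1,1) = 0$, so the gradient $(2,2)$ is normal to the circle at $(1,1)$.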

Tangent Plane and Normal Line to a Level Surface

Let $f : E \subseteq \mathbb{R}^3 \to \mathbb{R}$ and let $X_0 = (x_0, y_0, z_0)$ be a point on the level surface $S = \{(x,y,z) \in \mathbb{R}^3 : f(x,y,z) = c\}$, where $c$ is a fixed real number. Let $f$ be differentiable at $X_0$.

The tangent plane to $S$ at $X_0$ is the plane passing through $X_0$ and normal to the gradient vector $\nabla f$ at $X_0$. Its equation is
$$f_x(X_0)(x - x_0) + f_y(X_0)(y - y_0) + f_z(X_0)(z - z_0) = 0.$$
The normal line to $S$ at $X_0$ is the line perpendicular to the tangent plane and parallel to $\nabla f(X_0)$, given by the equations
$$x = x_0 + f_x(X_0)\, t, \quad y = y_0 + f_y(X_0)\, t, \quad z = z_0 + f_z(X_0)\, t, \quad t \in \mathbb{R}.$$

Normal Line and Tangent Plane to a Surface

Example

Find the tangent plane and normal line to the surface $x^2 + y^2 + z^2 = 3$ at the point $(-1, 1, 1)$.

The given surface can be written as the level surface $f(x,y,z) = 3$ where $f(x,y,z) = x^2 + y^2 + z^2$. Here $f_x(x,y,z) = 2x$, $f_y(x,y,z) = 2y$ and $f_z(x,y,z) = 2z$, so $f_x(-1,1,1) = -2$, $f_y(-1,1,1) = 2$, $f_z(-1,1,1) = 2$.

Equation of the tangent plane: $-2(x - (-1)) + 2(y - 1) + 2(z - 1) = 0$, or equivalently, $-x + y + z = 3$.

The normal line to the surface at $(-1,1,1)$ is given by $x = -1 - 2t$, $y = 1 + 2t$, $z = 1 + 2t$ for $t \in \mathbb{R}$.

Tangent Plane and Normal Line for a Surface $z = f(x,y)$

The equation of a surface $S : z = f(x,y)$ can be written in the form $f(x,y) - z = 0$. Hence, the surface $z = f(x,y)$ is also the level surface $F(x,y,z) = 0$ of the function $w = F(x,y,z) = f(x,y) - z$.

If $X_0 = (x_0, y_0, z_0)$ is a point on the surface $z = f(x,y)$ and $f_x$ and $f_y$ are continuous at $(x_0, y_0)$, then the tangent plane to the surface $S$ at $X_0$ is the plane
$$f_x(x_0, y_0)(x - x_0) + f_y(x_0, y_0)(y - y_0) - (z - z_0) = 0,$$
or equivalently, $z = z_0 + f_x(x_0, y_0)(x - x_0) + f_y(x_0, y_0)(y - y_0)$.

Tangent Plane and Normal Line for a Surface $z = f(x,y)$

The normal line to the surface $S$ at $P_0$ is the line
$$x = x_0 + f_x(x_0, y_0)\, t, \quad y = y_0 + f_y(x_0, y_0)\, t, \quad z = z_0 - t, \quad t \in \mathbb{R}.$$
Example: Find equations for the tangent plane and normal line to the surface $z = 9 - x^2 - y^2$ at the point $P_0(1, 2, 4)$.

Observe that $f_x(1,2) = -2$ and $f_y(1,2) = -4$. Therefore, the equation of the tangent plane is
$$(-2)(x - 1) + (-4)(y - 2) - (z - 4) = 0 \;\Longrightarrow\; 2x + 4y + z = 14.$$
The equation of the normal line is $x = 1 - 2t$, $y = 2 - 4t$, $z = 4 - t$ for $t \in \mathbb{R}$.

Partial Derivatives of Higher Order

Let $f : \mathbb{R}^2 \to \mathbb{R}$ be defined by $f(x,y) = x^4 y + y^3 x$ for $(x,y) \in \mathbb{R}^2$. Then
$$\frac{\partial f}{\partial x} = 4x^3 y + y^3 \quad\text{and}\quad \frac{\partial f}{\partial y} = x^4 + 3y^2 x.$$
These first order partial derivatives are again functions from $\mathbb{R}^2$ to $\mathbb{R}$, so we can again differentiate partially with respect to the variables $x$ and $y$. For example,
$$\frac{\partial}{\partial x}\!\left(\frac{\partial f}{\partial x}\right) = \frac{\partial}{\partial x}\left(4x^3 y + y^3\right) = 12x^2 y = \frac{\partial^2 f}{\partial x^2},$$
$$\frac{\partial}{\partial y}\!\left(\frac{\partial f}{\partial x}\right) = \frac{\partial}{\partial y}\left(4x^3 y + y^3\right) = 4x^3 + 3y^2 = \frac{\partial^2 f}{\partial y\, \partial x},$$
$$\frac{\partial}{\partial x}\!\left(\frac{\partial f}{\partial y}\right) = \frac{\partial}{\partial x}\left(x^4 + 3y^2 x\right) = 4x^3 + 3y^2 = \frac{\partial^2 f}{\partial x\, \partial y},$$
$$\frac{\partial}{\partial y}\!\left(\frac{\partial f}{\partial y}\right) = \frac{\partial}{\partial y}\left(x^4 + 3y^2 x\right) = 6xy = \frac{\partial^2 f}{\partial y^2}.$$

Notations for Partial Derivatives of Higher Order

Let $f : S \subseteq \mathbb{R}^n \to \mathbb{R}$ where $S$ is an open set in $\mathbb{R}^n$.
$$\frac{\partial^2 f}{\partial x_i\, \partial x_j} = \frac{\partial}{\partial x_i}\!\left(\frac{\partial f}{\partial x_j}\right) = f_{x_j x_i} = \partial_{ij} f = \partial_{x_i} \partial_{x_j} f,$$
$$\frac{\partial^3 f}{\partial x_i\, \partial x_j\, \partial x_k} = \frac{\partial}{\partial x_i}\!\left(\frac{\partial}{\partial x_j}\!\left(\frac{\partial f}{\partial x_k}\right)\right) = \partial_{ijk} f = f_{x_k x_j x_i},$$
and so on.

Note: In the notation $f_{x_i x_j x_k}$, the variable $x_i$, which is closest to $f$, is differentiated first, then the next variable $x_j$, and so on.

Mixed Partial Derivatives: The partial derivatives $\partial_{ij} f$ and $\partial_{ji} f$ with $i \neq j$ are called the mixed (partial) derivatives.

$\partial_{12} f(X_0) \neq \partial_{21} f(X_0)$ is possible

Let
$$f(x,y) = \begin{cases} \dfrac{xy(x^2 - y^2)}{x^2 + y^2} & \text{if } (x,y) \neq (0,0), \\[2mm] 0 & \text{if } (x,y) = (0,0). \end{cases}$$
Prove that:
$f$, $f_x$, $f_y$ are continuous in $\mathbb{R}^2$;
$\partial_{12} f$ and $\partial_{21} f$ exist at every point of $\mathbb{R}^2$, and are continuous except at $(0,0)$;
$\partial_{12} f(0,0) = 1$ and $\partial_{21} f(0,0) = -1$ (the second order mixed partial derivatives are not equal at the origin);
if $X_0 = (x_0, y_0) \neq (0,0)$ then $\partial_{12} f(X_0) = \partial_{21} f(X_0)$.
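Sketch of the key computation at the origin: from the difference quotients, $f_x(0,y) = \lim_{h \to 0} \frac{f(h,y)}{h} = -y$ for all $y$, and $f_y(x,0) = \lim_{k \to 0} \frac{f(x,k)}{k} = x$ for all $x$. Hence
$$\partial_{12} f(0,0) = \lim_{h \to 0} \frac{f_y(h,0) - f_y(0,0)}{h} = \lim_{h \to 0} \frac{h}{h} = 1, \qquad \partial_{21} f(0,0) = \lim_{k \to 0} \frac{f_x(0,k) - f_x(0,0)}{k} = \lim_{k \to 0} \frac{-k}{k} = -1.$$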

When are mixed partial derivatives equal?

Theorem: Let $f : S \subseteq \mathbb{R}^n \to \mathbb{R}$ where $S$ is an open set in $\mathbb{R}^n$, and let $X_0 \in S$. Assume that $\partial_{ij} f$ and $\partial_{ji} f$ exist in a neighborhood of the point $X_0$. If the mixed partial derivatives $\partial_{ij} f$ and $\partial_{ji} f$ are continuous at $X_0$, then $\partial_{ij} f(X_0) = \partial_{ji} f(X_0)$.

Continuous partial derivatives

Let $S$ be an open subset of $\mathbb{R}^n$ and $f : S \subseteq \mathbb{R}^n \to \mathbb{R}$. Suppose that $\partial_i f(X)$ exists for all $X \in S$ and $i = 1, \ldots, n$. Then each $\partial_i f$ defines a function on $S$. If $\partial_i f : S \to \mathbb{R}$, $X \mapsto \partial_i f(X)$, is continuous for $i = 1, \ldots, n$, then $f$ is said to be continuously differentiable on $S$ (in short, $C^1$).

Fact: $f$ is $C^1$ $\iff$ $\nabla f : S \subseteq \mathbb{R}^n \to \mathbb{R}^n$, $X \mapsto \nabla f(X)$, is continuous.

Recall: $f$ is $C^1$ $\Rightarrow$ $f$ is differentiable, but $f$ differentiable $\not\Rightarrow$ $f$ is $C^1$.

Examples: Consider $f : \mathbb{R}^2 \to \mathbb{R}$ given by $f(x,y) = x^2 + e^{xy} + y^2$. Then $f$ is $C^1$.

Consider $f : \mathbb{R}^2 \to \mathbb{R}$ given by $f(0,0) = 0$ and $f(x,y) := (x^2 + y^2)\sin\!\left(1/(x^2 + y^2)\right)$ if $(x,y) \neq (0,0)$. Then $f$ is differentiable but NOT $C^1$.

Continuous partial derivatives

Let $S$ be an open subset of $\mathbb{R}^n$ and let $f : S \subseteq \mathbb{R}^n \to \mathbb{R}$ be differentiable, so that $\partial_i f : S \to \mathbb{R}$ for $i = 1, \ldots, n$. If the partial derivatives of $\partial_j f$ exist at $X_0 \in S$ for $j = 1, \ldots, n$, that is, $\partial_i \partial_j f(X_0)$ exists for $i, j = 1, 2, \ldots, n$, then $f$ is said to have second order partial derivatives at $X_0$.

$f$ is said to be $C^2$ (twice continuously differentiable) if $\partial_i \partial_j f(X)$ exists for all $X \in S$ and $\partial_i \partial_j f : S \to \mathbb{R}$ is continuous for $i, j = 1, 2, \ldots, n$. The $p$-th order partial derivatives of $f$ are defined similarly.

Continuous partial derivatives

Fact: $f : \mathbb{R}^n \to \mathbb{R}$ is $C^2$ $\Rightarrow$ $\nabla f : \mathbb{R}^n \to \mathbb{R}^n$ is differentiable.

Hessian: Suppose that $f(x,y)$ has second order partial derivatives at $X_0 = (a,b)$. Then the matrix
$$H_f(X_0) := \begin{bmatrix} \partial_x \partial_x f(X_0) & \partial_y \partial_x f(X_0) \\ \partial_x \partial_y f(X_0) & \partial_y \partial_y f(X_0) \end{bmatrix} = \begin{bmatrix} f_{xx}(X_0) & f_{xy}(X_0) \\ f_{yx}(X_0) & f_{yy}(X_0) \end{bmatrix}$$
is called the Hessian of $f$ at $X_0$.

Hessian

Fact: Suppose that $f : \mathbb{R}^n \to \mathbb{R}$ is $C^2$ and $X_0 \in \mathbb{R}^n$. Then the Hessian
$$H_f(X_0) := \begin{bmatrix} \partial_1 \partial_1 f(X_0) & \cdots & \partial_n \partial_1 f(X_0) \\ \vdots & & \vdots \\ \partial_1 \partial_n f(X_0) & \cdots & \partial_n \partial_n f(X_0) \end{bmatrix}$$
is symmetric. Also $H_f(X_0) = J_{\nabla f}(X_0) =$ Jacobian of $\nabla f$ at $X_0$.

Example: Consider $f(x,y) = x^2 - 2xy + 2y^2$. Then
$$H_f(x,y) = \begin{bmatrix} f_{xx}(x,y) & f_{xy}(x,y) \\ f_{yx}(x,y) & f_{yy}(x,y) \end{bmatrix} = \begin{bmatrix} 2 & -2 \\ -2 & 4 \end{bmatrix}.$$
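This example also illustrates the fact $H_f(X_0) = J_{\nabla f}(X_0)$: here $\nabla f(x,y) = (2x - 2y,\ -2x + 4y)$, and the Jacobian matrix of the map $(x,y) \mapsto (2x - 2y,\ -2x + 4y)$ is
$$J_{\nabla f}(x,y) = \begin{bmatrix} 2 & -2 \\ -2 & 4 \end{bmatrix} = H_f(x,y).$$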

Mean Value Theorem

Let $f : S \subseteq \mathbb{R}^n \to \mathbb{R}$ where $S$ is an open and convex set in $\mathbb{R}^n$. If $f$ is differentiable in $S$, then for any two points $X_1$ and $X_2$ in $S$ there exists a point $X_0$ on the line segment $L$ joining $X_1$ and $X_2$ such that
$$f(X_2) - f(X_1) = \nabla f(X_0) \cdot (X_2 - X_1).$$
Proof: Let $\Phi : \mathbb{R} \to \mathbb{R}^n$ be defined by $\Phi(t) = (1-t)X_1 + tX_2$ for $t \in \mathbb{R}$. Consider the function $g(t) = f(\Phi(t))$ for $t \in [0,1]$, invoke the chain rule, and apply the mean value theorem of single-variable calculus to $g$.
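For example, take $f(x,y) = x^2 + y^2$, $X_1 = (0,0)$ and $X_2 = (1,1)$. Points of the segment are $\Phi(t) = (t,t)$, and
$$\nabla f(t,t) \cdot (X_2 - X_1) = (2t, 2t) \cdot (1,1) = 4t, \qquad f(X_2) - f(X_1) = 2,$$
so the conclusion of the theorem holds with $t = 1/2$, i.e., at the midpoint $X_0 = (1/2, 1/2)$ of the segment.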

Taylor's Theorem

We can also write the MVT in the following way: Let $f : S \to \mathbb{R}$ be differentiable and let $X_0 \in S$. Then there exists $0 < \theta < 1$ such that
$$f(X_0 + H) = f(X_0) + \sum_{i=1}^{n} \partial_i f(X_0 + \theta H)\, h_i.$$
Let $f : S \subseteq \mathbb{R}^n \to \mathbb{R}$ where $S$ is an open set in $\mathbb{R}^n$. Let $X_0 = (x_1, \ldots, x_n) \in S$ and let $H = (h_1, \ldots, h_n) \in \mathbb{R}^n$ be such that the line segment $L$ joining $X_0$ and $X_0 + H$ lies inside $S$. Suppose that $f$ and its partial derivatives through order $(n+1)$ are continuous in $S$.

Taylor's Theorem

Then, for some $0 < \theta < 1$, we have
$$f(X_0 + H) = f(X_0) + \sum_{k=1}^{n} \frac{1}{k!} \left(h_1 \partial_1 + \cdots + h_n \partial_n\right)^k f\Big|_{X_0} + \frac{1}{(n+1)!}\left(h_1 \partial_1 + \cdots + h_n \partial_n\right)^{n+1} f\Big|_{X_0 + \theta H}.$$
Case n = 2: Let $f : \mathbb{R}^2 \to \mathbb{R}$. Set the differential operators as
$$F = \left(h\frac{\partial}{\partial x} + k\frac{\partial}{\partial y}\right), \qquad F^{(2)} = \left(h\frac{\partial}{\partial x} + k\frac{\partial}{\partial y}\right)^{2} = h^2\frac{\partial^2}{\partial x^2} + 2hk\frac{\partial^2}{\partial x\, \partial y} + k^2\frac{\partial^2}{\partial y^2},$$
and
$$F^{(n)} = \left(h\frac{\partial}{\partial x} + k\frac{\partial}{\partial y}\right)^{n} \quad\text{(expand it by the binomial theorem).}$$

Taylor's Formula: Suppose $f(x,y)$ and its partial derivatives through order $(n+1)$ are continuous throughout an open rectangular region $R$ centered at $(a,b)$ in $\mathbb{R}^2$. Then, throughout $R$,
$$f(a+h, b+k) = f(a,b) + \sum_{r=1}^{n} \frac{1}{r!}\left(h\frac{\partial}{\partial x} + k\frac{\partial}{\partial y}\right)^{r} f(a,b) + \text{remainder term}.$$
The remainder term is
$$\frac{1}{(n+1)!}\left(h\frac{\partial}{\partial x} + k\frac{\partial}{\partial y}\right)^{n+1} f(a + ch,\ b + ck),$$
where $(a + ch,\ b + ck)$, for some $c$ with $0 < c < 1$, is a point on the line segment joining $(a,b)$ and $(a+h, b+k)$.

Taylor's Formula and Polynomial Approximations

Taylor's formula provides polynomial approximations of $f$ near the point $(a,b)$. The first $n$ derivative terms yield the $n$-th degree polynomial approximation to $f$, and the last term (the remainder term) gives the approximation error. Therefore, an upper bound on the remainder term gives an upper bound on the error made in approximating $f$ by the $n$-th degree polynomial.

If $n = 1$, we get the linear approximation of $f$ near $(a,b)$.
If $n = 2$, we get the quadratic approximation of $f$ near $(a,b)$.
If $n = 3$, we get the cubic approximation of $f$ near $(a,b)$.
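For example, for $f(x,y) = x^2 y$ near $(a,b) = (1,1)$, writing $h = x - 1$ and $k = y - 1$ gives exactly
$$f(1+h, 1+k) = (1+h)^2(1+k) = 1 + 2h + k + h^2 + 2hk + h^2 k.$$
The linear approximation is $1 + 2h + k$, the quadratic approximation is $1 + 2h + k + h^2 + 2hk$, and the remaining term $h^2 k$ is the error of the quadratic approximation.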

Example

Let $f(x,y) = \dfrac{1}{xy}$ if $xy \neq 0$ and let $(a,b) = (1,-1)$. Compute the first two terms in Taylor's formula of $f$ near $(1,-1)$.
$$f_x = -x^{-2}y^{-1}, \quad f_y = -x^{-1}y^{-2}, \quad f_{xx} = 2x^{-3}y^{-1}, \quad f_{xy} = x^{-2}y^{-2} \quad\text{and}\quad f_{yy} = 2x^{-1}y^{-3}.$$
$$\frac{1}{(1+h)(-1+k)} = -1 + (h - k) + \frac{h^2}{(1+\theta h)^3(-1+\theta k)} + \frac{hk}{(1+\theta h)^2(-1+\theta k)^2} + \frac{k^2}{(1+\theta h)(-1+\theta k)^3},$$
where $0 < \theta < 1$.

Maxima/Minima

Let $f : E \subseteq \mathbb{R}^n \to \mathbb{R}$. If $f$ is continuous on $E$ and $E$ is closed and bounded, then $f$ attains its maximum and minimum values on $E$.

How do we find the (extremum) points at which $f$ attains its maximum or minimum value on $E$? Before answering this question, we formally define the maximum and minimum of $f$.

Local minimum/maximum and Global minimum/maximum

Let $f : E \subseteq \mathbb{R}^n \to \mathbb{R}$.

A point $X^* \in E$ is said to be a point of relative/local minimum of $f$ if there exists an $r > 0$ such that $f(X^*) \leq f(X)$ for all $X \in E$ with $\|X - X^*\| < r$. In this case, the value $f(X^*)$ is called a relative/local minimum of $f$.

A point $X^* \in E$ is said to be a point of relative/local maximum of $f$ if there exists an $r > 0$ such that $f(X) \leq f(X^*)$ for all $X \in E$ with $\|X - X^*\| < r$. In this case, the value $f(X^*)$ is called a relative/local maximum of $f$.

A point $X^* \in E$ is said to be a point of absolute/global minimum of $f$ if $f(X^*) \leq f(X)$ for all $X \in E$.

A point $X^* \in E$ is said to be a point of absolute/global maximum of $f$ if $f(X) \leq f(X^*)$ for all $X \in E$.

Extremum Points and Extremum Values

A point $X^* \in E$ is said to be a point of extremum of $f$ if it is either a (local/global) minimum point or a maximum point of $f$. The function value $f(X^*)$ at an extremum point $X^*$ is called an extremum value of $f$.

Absolute/Global Extremum

$f(x,y) = -x^2 - y^2$ has an absolute maximum at $(0,0)$ in $\mathbb{R}^2$.

$f(x,y) = x^2 + y^2$ has an absolute minimum at $(0,0)$ in $\mathbb{R}^2$.

Relative/Local Extremum and Saddle Point

Critical Points

Let $f : E \subseteq \mathbb{R}^n \to \mathbb{R}$. A point $X^* \in E$ is said to be a critical point of $f$ if either
$$\frac{\partial f}{\partial x_1}(X^*) = \frac{\partial f}{\partial x_2}(X^*) = \cdots = \frac{\partial f}{\partial x_n}(X^*) = 0,$$
or at least one of the first order partial derivatives of $f$ does not exist at $X^*$. The function value $f(X^*)$ at the critical point $X^*$ is called a critical value of $f$.

Examples: The point $(0,0)$ is the critical point of the function $f(x,y) = x^2 + y^2$, and the critical value corresponding to this critical point is $0$.

The point $(0,0)$ is a critical point of the function $h(x,y) = x\sin(1/x) + y$ if $x \neq 0$ and $h(x,y) = y$ if $x = 0$.
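Indeed, $\partial_x h(0,0) = \lim_{t \to 0} \dfrac{h(t,0) - h(0,0)}{t} = \lim_{t \to 0} \sin(1/t)$ does not exist, since $\sin(1/t)$ oscillates between $-1$ and $1$; so $(0,0)$ is a critical point of $h$ even though $\partial_y h \equiv 1$.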

Saddle Points

Let $X^*$ be a critical point of $f$. If every neighborhood $N(X^*)$ of the point $X^*$ contains points at which $f$ is strictly greater than $f(X^*)$ and also contains points at which $f$ is strictly less than $f(X^*)$, then $X^*$ is called a saddle point of $f$. That is, $f$ attains neither a relative maximum nor a relative minimum at the critical point $X^*$.

Example: Let $f(x,y) = y^2 - x^2$ for $(x,y) \in \mathbb{R}^2$. Then $(0,0)$ is a saddle point of $f$.
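Indeed, $\nabla f(x,y) = (-2x, 2y)$ vanishes at $(0,0)$, so $(0,0)$ is a critical point; and $f(x,0) = -x^2 < 0 = f(0,0)$ for $x \neq 0$ while $f(0,y) = y^2 > 0$ for $y \neq 0$, so every neighborhood of $(0,0)$ contains points where $f$ is strictly less than $f(0,0)$ and points where it is strictly greater.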

To find the extremum points of f, where do we look?

If $f : E \subseteq \mathbb{R}^n \to \mathbb{R}$, then where should we look in $E$ for the extremum values of $f$? The maxima and minima of $f$ can occur only at boundary points of $E$, or at critical points of $f$: interior points of $E$ where all the first order partial derivatives of $f$ are zero, or interior points of $E$ where at least one of the first order partial derivatives of $f$ does not exist.

Example: In the closed rectangle $R = \{(x,y) \in \mathbb{R}^2 : -1 \leq x \leq 1 \text{ and } -1 \leq y \leq 1\}$, the function $f(x,y) = x^2 + y^2$ attains its minimum value at $(0,0)$ and its maximum value at $(\pm 1, \pm 1)$.

Necessary Condition for Extremum

Let $f : E \subseteq \mathbb{R}^n \to \mathbb{R}$. If an interior point $X^*$ of $E$ is a point of relative/absolute extremum of $f$, and if the first order partial derivatives of $f$ at $X^*$ exist, then
$$\partial_1 f(X^*) = \cdots = \partial_n f(X^*) = 0.$$
That is, the gradient vector at $X^*$ is the zero vector. Further, if $f$ is differentiable at $X^*$, then the directional derivative of $f$ at $X^*$ in every direction is zero.
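Note that this condition is necessary but not sufficient: for $f(x,y) = y^2 - x^2$ we have $\nabla f(0,0) = (0,0)$, yet $(0,0)$ is a saddle point and not a point of extremum.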