Differentiation in higher dimensions

Chapter 2: Differentiation in higher dimensions

2.1 The Total Derivative

Recall that if f : R → R is a 1-variable function, and a ∈ R, we say that f is differentiable at x = a if and only if the ratio (f(a+h) − f(a))/h tends to a finite limit, denoted f′(a), as h tends to 0.

There are two possible ways to generalize this for vector fields f : D → R^m, D ⊆ R^n, for points a in the interior D⁰ of D. (The interior of a set X is defined to be the subset X⁰ obtained by removing all the boundary points. Since every point of X⁰ is an interior point, it is open.) The reader seeing this material for the first time will be well advised to stick to vector fields f with domain all of R^n in the beginning. Even in the one-dimensional case, if a function is defined on a closed interval [a, b], say, then one can properly speak of differentiability only at points in the open interval (a, b).

The first thing one might do is to fix a vector v in R^n and say that f is differentiable along v iff the following limit makes sense:

    lim_{h→0} (1/h)(f(a + hv) − f(a)).

When it does, we write f′(a; v) for the limit. Note that this definition makes sense because a is an interior point. Indeed, under this hypothesis, D contains a basic open set U containing a, and so a + hv will, for small enough h, fall into U, allowing us to speak of f(a + hv). This
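The limit defining f′(a; v) is easy to probe numerically. The sketch below is an illustration of mine, not part of the notes: it approximates f′(a; v) by a symmetric difference quotient, on a made-up scalar field f(x, y) = x² + y².

```python
def directional_derivative(f, a, v, h=1e-6):
    # Symmetric difference quotient approximating
    # f'(a; v) = lim_{h -> 0} (f(a + h v) - f(a)) / h.
    ap = [ai + h * vi for ai, vi in zip(a, v)]
    am = [ai - h * vi for ai, vi in zip(a, v)]
    return (f(ap) - f(am)) / (2 * h)

# Illustrative scalar field (my own example): f(x, y) = x^2 + y^2.
f = lambda p: p[0] ** 2 + p[1] ** 2
d1 = directional_derivative(f, [1.0, 2.0], [1.0, 0.0])  # exact value: 2x = 2
d2 = directional_derivative(f, [1.0, 2.0], [1.0, 1.0])  # exact value: 2x + 2y = 6
```

For this quadratic f the symmetric quotient is exact up to rounding, so shrinking h is not even needed to see the limit.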

derivative behaves exactly like the one-variable derivative and has analogous properties. For example, we have the following

Mean Value Theorem. Assume f′(a + tv; v) exists for all 0 ≤ t ≤ 1. Then there exists t₀ ∈ [0, 1] such that

    f′(a + t₀v; v) = f(a + v) − f(a).

Proof. Put φ(t) = f(a + tv). By hypothesis, φ is differentiable at every t in [0, 1], and φ′(t) = f′(a + tv; v). By the one-variable mean value theorem, there exists a t₀ such that

    φ′(t₀) = (φ(1) − φ(0))/(1 − 0) = φ(1) − φ(0) = f(a + v) − f(a).

Done.

When v is a unit vector, f′(a; v) is called the directional derivative of f at a in the direction of v.

The disadvantage of this construction is that it forces us to study the change of f in one direction at a time. So we revisit the one-dimensional definition and note that the condition for differentiability there is equivalent to requiring that there exists a constant c (= f′(a)) such that

    lim_{h→0} |f(a + h) − f(a) − ch| / |h| = 0.

If we put L(h) = f′(a)h, then L : R → R is clearly a linear map. We generalize this idea in higher dimensions as follows:

Definition. Let f : D → R^m (D ⊆ R^n) be a vector field and a an interior point of D. Then f is differentiable at x = a if and only if there exists a linear map L : R^n → R^m such that

    (*)  lim_{u→0} ||f(a + u) − f(a) − L(u)|| / ||u|| = 0.

Note that the norm || · || denotes the length of vectors in R^m in the numerator and in R^n in the denominator; this should not lead to any confusion, however.

Lemma 1. Such an L, if it exists, is unique.

Proof. Suppose we have L, M : R^n → R^m satisfying (*) at x = a. Then

    lim_{u→0} ||L(u) − M(u)|| / ||u||
      ≤ lim_{u→0} ||L(u) + f(a) − f(a + u)|| / ||u|| + lim_{u→0} ||f(a + u) − f(a) − M(u)|| / ||u|| = 0.

Pick any non-zero v ∈ R^n, and set u = tv, with t ∈ R. Then the linearity of L and M implies that L(tv) = tL(v) and M(tv) = tM(v). Consequently, we have

    0 = lim_{t→0} ||L(tv) − M(tv)|| / ||tv|| = lim_{t→0} |t| ||L(v) − M(v)|| / (|t| ||v||) = ||L(v) − M(v)|| / ||v||.

Then L(v) − M(v) must be zero.

Definition. If the limit condition (*) holds for a linear map L, we call L the total derivative of f at a, and denote it by T_a f.

It is mind-boggling at first to think of the derivative as a linear map. A natural question which arises immediately is to know what the value of T_a f is at any vector v in R^n. We will show in Section 2.3 that this value is precisely f′(a; v), thus linking the two generalizations of the one-dimensional derivative.

Sometimes one can guess what the answer should be, and if (*) holds for this choice, then it must be the derivative by uniqueness. Here are four examples which illustrate this.

(1) Let f be a constant vector field, i.e., there exists a vector w ∈ R^m such that f(x) = w, for all x in the domain D. Then we claim that f is differentiable at any a ∈ D⁰ with derivative zero. Indeed, if we put L(u) = 0, for any u ∈ R^n, then (*) is satisfied, because f(a + u) − f(a) = w − w = 0.

(2) Let f be a linear map. Then we claim that f is differentiable everywhere with T_a f = f. Indeed, if we put L(u) = f(u), then by the linearity of f, f(a + u) − f(a) − L(u) will be zero for any u ∈ R^n, so that (*) holds trivially.

(3) Let f(x) = ⟨x, x⟩ = ||x||². Then

    f(a + u) − f(a) = ⟨a + u, a + u⟩ − ⟨a, a⟩ = ⟨u, a⟩ + ⟨a, u⟩ + ⟨u, u⟩ = 2⟨a, u⟩ + ||u||²,

using the linearity of ⟨·, ·⟩ in each argument as well as its symmetry. Defining L by L(u) = 2⟨a, u⟩, we have

    ||f(a + u) − f(a) − L(u)|| / ||u|| = ||u||² / ||u|| = ||u||,

and this tends to zero as u tends to zero. Identifying linear maps from R^n to R with row vectors, we get T_a f = 2aᵗ.

(4) Here is another variation on the theme that the derivative of x² is 2x. Let f(X) = X² = X·X, where X is an n×n matrix and · denotes matrix multiplication. So f is a function from R^{n²} to R^{n²}, where we view the space of n×n matrices as R^{n²}. Again, just using the bilinearity of matrix multiplication, we find

    f(A + U) − f(A) = A·U + U·A + U².

Using the fact (not proven in this class) that ||X·Y|| ≤ ||X|| ||Y|| for matrices X and Y, where ||X|| = (Σ_{i,j} X_{i,j}²)^{1/2}, we find that T_A f is the linear map U ↦ A·U + U·A. But this time we cannot rewrite this as 2A·U, since matrix multiplication is not commutative.

This concludes our list of examples where we can show directly that T_a f exists. Theorem 1(d) below will give a powerful criterion for the existence of T_a f in many more examples.

Before we leave this section, it will be useful to take note of the following:

Lemma 2. Let f₁, ..., f_m be the component (scalar) fields of f. Then f is differentiable at a iff each f_i is differentiable at a.

An easy consequence of this lemma is that, when n = 1, f is differentiable at a iff the following familiar-looking limit exists in R^m:

    lim_{h→0} (f(a + h) − f(a))/h,

allowing us to suggestively write f′(a) instead of T_a f. Clearly, f′(a) is given by the vector (f₁′(a), ..., f_m′(a)), so that (T_a f)(h) = f′(a)h, for any h ∈ R.

Proof. Let f be differentiable at a. For each v ∈ R^n, write L_i(v) for the i-th component of (T_a f)(v). Then L_i is clearly linear. Since f_i(a + u) − f_i(a) − L_i(u) is the i-th component of f(a + u) − f(a) − L(u), the norm of the former is less than or equal to that of the latter. This shows that (*) holds with f replaced by f_i and L replaced by L_i. So f_i is differentiable for any i. Conversely, suppose each f_i is differentiable. Put L(v) = ((T_a f₁)(v), ..., (T_a f_m)(v)). Then L is a linear map, and by the triangle inequality,

    ||f(a + u) − f(a) − L(u)|| ≤ Σ_{i=1}^m |f_i(a + u) − f_i(a) − (T_a f_i)(u)|.

It follows easily that (*) holds, and so f is differentiable at a.
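Example (4) can be sanity-checked numerically: for a small perturbation U, the residual f(A+U) − f(A) − (A·U + U·A) equals U², whose norm is of order ||U||², so the ratio in (*) is of order ||U||. A minimal pure-Python sketch (the matrices and helper names are my own, not from the notes):

```python
def matmul(X, Y):
    # product of two square matrices given as lists of rows
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def madd(X, Y):
    return [[a + b for a, b in zip(rx, ry)] for rx, ry in zip(X, Y)]

def msub(X, Y):
    return [[a - b for a, b in zip(rx, ry)] for rx, ry in zip(X, Y)]

def frob(X):
    # the norm ||X|| = sqrt(sum of squares of entries) used in the text
    return sum(x * x for row in X for x in row) ** 0.5

A = [[1.0, 2.0], [3.0, 4.0]]
t = 1e-5
U = [[t, -t], [2 * t, t]]                                   # a small perturbation

incr = msub(matmul(madd(A, U), madd(A, U)), matmul(A, A))   # f(A+U) - f(A)
L_U = madd(matmul(A, U), matmul(U, A))                      # candidate (T_A f)(U)
ratio = frob(msub(incr, L_U)) / frob(U)
# the residual is exactly U*U, so ratio = ||U*U|| / ||U||, of order ||U||
```

Shrinking t makes `ratio` shrink proportionally, which is exactly the limit condition (*).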

2.2 Partial Derivatives

Let {e₁, ..., e_n} denote the standard basis of R^n. The directional derivatives along the unit vectors e_j are of special importance.

Definition. Let 1 ≤ j ≤ n. The j-th partial derivative of f at x = a is f′(a; e_j), denoted by ∂f/∂x_j(a) or D_j f(a).

Just as in the case of the total derivative, it can be shown that ∂f/∂x_j(a) exists iff ∂f_i/∂x_j(a) exists for each coordinate field f_i.

Example: Define f : R³ → R² by f(x, y, z) = (e^{x sin(y)}, z cos(y)). All the partial derivatives exist at any a = (x₀, y₀, z₀). We will show this for ∂f/∂y and leave it to the reader to check the remaining cases. Note that

    (1/h)(f(a + h e₂) − f(a)) = ((e^{x₀ sin(y₀+h)} − e^{x₀ sin(y₀)})/h, z₀ (cos(y₀+h) − cos(y₀))/h).

We have to understand the limit as h goes to 0. The methods of one-variable calculus show that the right-hand side tends to the finite limit

    (x₀ cos(y₀) e^{x₀ sin(y₀)}, −z₀ sin(y₀)),

which is ∂f/∂y(a). In effect, the partial derivative with respect to y is calculated like a one-variable derivative, keeping x and z fixed. Let us note without proof that ∂f/∂x(a) is (sin(y₀) e^{x₀ sin(y₀)}, 0) and ∂f/∂z(a) is (0, cos(y₀)).

It is easy to see from the definition that f′(a; tv) equals t f′(a; v), for any t ∈ R. We also have the following

Lemma 3. Suppose the derivatives of f along any v ∈ R^n exist near a and are continuous at a. Then f′(a; v + v′) = f′(a; v) + f′(a; v′), for all v, v′ in R^n. In particular, the directional derivatives of f are all determined by the n partial derivatives.

Proof. If φ, ψ are functions of h ∈ R, let us write φ(h) ≈ ψ(h) iff lim_{h→0} (φ(h) − ψ(h))/h = 0. Check that ≈ is an equivalence relation. Then by definition we have, for all a ∈ D⁰ and u in R^n,

    f(a + hu) ≈ f(a) + h f′(a; u).

Then f(a + h(v + v′)) is equivalent to f(a) + h f′(a; v + v′) on the one hand, and to

    f(a + hv) + h f′(a + hv; v′) ≈ f(a) + h(f′(a; v) + f′(a + hv; v′))

on the other. Moreover, the continuity hypothesis shows that f′(a + hv; v′) tends to f′(a; v′) as h goes to 0. Consequently, we get the equivalence of h f′(a; v + v′) with h(f′(a; v) + f′(a; v′)). Since these are independent of h, they must in fact be equal. Finally, since {e_j | 1 ≤ j ≤ n} is a basis of R^n, we can write any v as Σ_j α_j e_j, and by what we have just shown, f′(a; v) is determined as Σ_j α_j ∂f/∂x_j(a).

In the next section we will show that the conclusion of this lemma remains valid without the continuity hypothesis if we assume instead that f has a total derivative at a.

The gradient of a scalar field g at an interior point a of its domain in R^n is defined to be the following vector in R^n:

    ∇g(a) = grad g(a) = (∂g/∂x₁(a), ..., ∂g/∂x_n(a)).

Given a vector field f as above, we can then put together the gradients of its component fields f_i, 1 ≤ i ≤ m, and form the following important matrix, called the Jacobian matrix at a:

    Df(a) = (∂f_i/∂x_j(a))_{1 ≤ i ≤ m, 1 ≤ j ≤ n} ∈ M_{m,n}(R).

The i-th row is given by ∇f_i(a), while the j-th column is given by ∂f/∂x_j(a).

2.3 The Main Theorem

In this section we collect the main properties of the total and partial derivatives.

Theorem 1. Let f : D → R^m be a vector field, and a an interior point of its domain D ⊆ R^n.

(a) If f is differentiable at a, then for any vector v in R^n,

    (T_a f)(v) = f′(a; v).

In particular, since T_a f is linear, we have

    f′(a; αv + βv′) = α f′(a; v) + β f′(a; v′)

for all v, v′ in R^n and α, β in R.

(b) Again assume that f is differentiable at a. Then the matrix of the linear map T_a f relative to the standard bases of R^n, R^m is simply the Jacobian matrix of f at a.

(c) f differentiable at a ⟹ f continuous at a.

(d) Suppose all the partial derivatives of f exist near a and are continuous at a. Then T_a f exists.

(e) (Chain rule) Consider f : R^n → R^m and g : R^m → R^k, with a ↦ b = f(a). Suppose f is differentiable at a and g is differentiable at b = f(a). Then the composite function h = g ∘ f is differentiable at a, and moreover,

    T_a h = T_b g ∘ T_a f.

In terms of the Jacobian matrices, this reads as

    Dh(a) = Dg(b) Df(a) ∈ M_{k,n}(R),

where juxtaposition indicates a matrix product.

(f) Assume T_a f and T_a g exist. Then T_a(f + g) exists and

    T_a(f + g) = T_a f + T_a g.  (additivity)

Now assume m = 1, i.e. f, g are scalar fields, differentiable at a. Then

(i) T_a(fg) = f(a) T_a g + g(a) T_a f  (product rule)

(ii) T_a(f/g) = (g(a) T_a f − f(a) T_a g) / g(a)², if g(a) ≠ 0  (quotient rule)
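Parts (b) and (e) can be illustrated numerically: approximate every Jacobian entry by a central difference quotient, then check that the Jacobian of the composite equals the matrix product Dg(b) Df(a). A sketch with made-up example maps (everything here is my own illustration, not from the notes):

```python
import math

def jac(f, a, m, h=1e-6):
    # Numerical Jacobian of f : R^n -> R^m at a; entry (i, j) is a
    # central-difference estimate of the partial of f_i along x_j.
    n = len(a)
    J = [[0.0] * n for _ in range(m)]
    for j in range(n):
        ap, am = list(a), list(a)
        ap[j] += h
        am[j] -= h
        fp, fm = f(ap), f(am)
        for i in range(m):
            J[i][j] = (fp[i] - fm[i]) / (2 * h)
    return J

# Made-up example maps: f : R^2 -> R^3 and g : R^3 -> R^2.
f = lambda p: (p[0] * p[1], p[0] + p[1], math.sin(p[0]))
g = lambda q: (q[0] + q[1] * q[2], q[0] * q[2])
h_comp = lambda p: g(f(p))                 # the composite h = g o f

a = [0.7, -0.3]
b = f(a)
Dh = jac(h_comp, a, 2)                     # direct Jacobian of the composite
Dg, Df = jac(g, b, 2), jac(f, a, 3)
# Matrix product Dg(b) * Df(a): rows of Dg paired with columns of Df.
Dg_Df = [[sum(x * y for x, y in zip(row, col)) for col in zip(*Df)]
         for row in Dg]
```

Up to the discretization error of the difference quotients, `Dh` and `Dg_Df` agree entrywise, which is exactly the statement Dh(a) = Dg(b) Df(a).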

The following corollary is an immediate consequence of the theorem, which we will make use of in the next chapter on normal vectors and extrema.

Corollary 1. Let g be a scalar field, differentiable at an interior point b of its domain D in R^n, and let v be any vector in R^n. Then we have

    ∇g(b) · v = g′(b; v).

Furthermore, let φ be a function from a subset of R into D ⊆ R^n, differentiable at an interior point a mapping to b. Put h = g ∘ φ. Then h is differentiable at a with

    h′(a) = ∇g(b) · φ′(a).

Here is a simple observation before we begin the proof. Let f : R² → R² be a vector field such that f₁(x, y) = φ(x) and f₂(x, y) = ψ(y), with φ, ψ differentiable everywhere. Then, clearly, the Jacobian matrix Df(x, y) is the diagonal matrix

    ( φ′(x)   0     )
    ( 0       ψ′(y) ).

Conversely, suppose we know a priori that Df is diagonal at all points, say

    Df(x, y) = ( μ(x)   0    )
               ( 0      ν(y) ).

Then ∂f₁/∂x = μ(x), ∂f₁/∂y = 0 = ∂f₂/∂x, and ∂f₂/∂y = ν(y), so that f₁(x, y) = ∫ μ(x) dx and f₂(x, y) = ∫ ν(y) dy. So f₁ is independent of y and f₂ is independent of x.

Proof of main theorem. (a) It suffices to show that (T_a f_i)(v) = f_i′(a; v) for each i ≤ m, and this is clear if v = 0 (both sides are zero by definition). So assume v ≠ 0. By definition,

    lim_{u→0} |f_i(a + u) − f_i(a) − (T_a f_i)(u)| / ||u|| = 0.

This means that we can write, for u = hv with h ∈ R,

    lim_{h→0} |f_i(a + hv) − f_i(a) − (T_a f_i)(hv)| / (|h| ||v||) = 0;

multiplying by ||v||, we deduce the existence of

    f_i′(a; v) = lim_{h→0} (f_i(a + hv) − f_i(a))/h = lim_{h→0} (1/h)(T_a f_i)(hv) = (T_a f_i)(v).

(b) By part (a), each partial derivative exists at a (since f is assumed to be differentiable at a). The matrix of the linear map T_a f is determined by its effect on the standard basis vectors. Let {ē_i | 1 ≤ i ≤ m} denote the standard basis in R^m. Then we have, by definition,

    (T_a f)(e_j) = Σ_{i=1}^m (T_a f_i)(e_j) ē_i = Σ_{i=1}^m ∂f_i/∂x_j(a) ē_i.

The matrix obtained is easily seen to be the Jacobian matrix Df(a).

(c) Suppose f is differentiable at a. This certainly implies that the limit of the function f(a + u) − f(a) − (T_a f)(u), as u tends to 0 in R^n, is 0 in R^m (from the very definition of T_a f, f(a + u) − f(a) − (T_a f)(u) tends to zero faster than ||u||; in particular, it tends to zero). Since T_a f is linear, T_a f is continuous (everywhere), so that lim_{u→0} (T_a f)(u) = 0. Hence lim_{u→0} f(a + u) = f(a), which means that f is continuous at a.

(d) By hypothesis, all the partial derivatives exist near a = (a₁, ..., a_n) and are continuous there. Write u = (u₁, ..., u_n) and define a linear map L by

    L(u) = Σ_{j=1}^n u_j ∂f/∂x_j(a).

We can write

    f(a + u) − f(a) = Σ_{j=1}^n (φ_j(a_j + u_j) − φ_j(a_j)),

where each φ_j is a one-variable function (depending on u) defined by

    φ_j(t) = f(a₁ + u₁, ..., a_{j−1} + u_{j−1}, t, a_{j+1}, ..., a_n).

By the mean value theorem,

    φ_j(a_j + u_j) − φ_j(a_j) = u_j φ_j′(t_j(u)) = u_j ∂f/∂x_j(y_j(u)),

for some t_j(u) ∈ [a_j, a_j + u_j], with y_j(u) = (a₁ + u₁, ..., a_{j−1} + u_{j−1}, t_j(u), a_{j+1}, ..., a_n). Putting these together, we see that it suffices to show that the following limit is zero:

    lim_{u→0} (1/||u||) || Σ_{j=1}^n u_j (∂f/∂x_j(a) − ∂f/∂x_j(y_j(u))) ||.

Clearly, |u_j| ≤ ||u|| for each j. So it follows, by the triangle inequality, that this limit is bounded above by the sum over j of

    lim_{u→0} ||∂f/∂x_j(a) − ∂f/∂x_j(y_j(u))||,

which is zero by the continuity of the partial derivatives at a. Here we are using the fact that each y_j(u) approaches a as u = (u₁, ..., u_n) goes to 0. Done.

(e) First we need the following simple

Lemma 4. Let T : R^n → R^m be a linear map. Then there exists c > 0 such that ||T(v)|| ≤ c||v|| for any v ∈ R^n.

Proof of Lemma. Let A be the matrix of T relative to the standard bases. Put C = max_j ||T(e_j)||. If v = Σ_{j=1}^n α_j e_j, then

    ||T(v)|| = ||Σ_{j=1}^n α_j T(e_j)|| ≤ C Σ_{1≤j≤n} |α_j| ≤ C (Σ_j |α_j|²)^{1/2} (Σ_j 1)^{1/2} = C √n ||v||,

by the Cauchy–Schwarz inequality. We are done by setting c = C√n.

Note that the Lemma implies that linear maps T are continuous. The optimal choice of c is

    c = sup{ ||T(v)|| / ||v|| : v ∈ R^n \ {0} } = sup{ ||T(v)|| : ||v|| = 1 }.

The Lemma implies that the first set is bounded, so the sup exists. The second set is even compact, so the sup is attained; i.e., there is always a vector v of norm one for which ||T(v)|| is the optimal constant c.

Proof of (e) (contd.). Write L = T_a f, M = T_b g, N = M ∘ L. To show: T_a h = N. Define

    F(x) = f(x) − f(a) − L(x − a),
    G(y) = g(y) − g(b) − M(y − b),
    H(x) = h(x) − h(a) − N(x − a).

Then we have

    lim_{x→a} ||F(x)|| / ||x − a|| = 0 = lim_{y→b} ||G(y)|| / ||y − b||.

So we need to show:

    lim_{x→a} ||H(x)|| / ||x − a|| = 0.

But

    H(x) = g(f(x)) − g(b) − M(L(x − a)).

Since L(x − a) = f(x) − f(a) − F(x), we get

    H(x) = [g(f(x)) − g(b) − M(f(x) − f(a))] + M(F(x)) = G(f(x)) + M(F(x)).

Therefore it suffices to prove:

(i) lim_{x→a} ||G(f(x))|| / ||x − a|| = 0, and

(ii) lim_{x→a} ||M(F(x))|| / ||x − a|| = 0.

By Lemma 4, we have ||M(F(x))|| ≤ c||F(x)||, for some c > 0. Then

    lim_{x→a} ||M(F(x))|| / ||x − a|| ≤ c lim_{x→a} ||F(x)|| / ||x − a|| = 0,

yielding (ii).

On the other hand, we know that lim_{y→b} ||G(y)|| / ||y − b|| = 0. So we can find, for every ε > 0, a δ > 0 such that ||G(f(x))|| < ε||f(x) − b|| if ||f(x) − b|| < δ. But since f is continuous, ||f(x) − b|| < δ whenever ||x − a|| < δ₁, for a small enough δ₁ > 0. Hence

    ||G(f(x))|| < ε||f(x) − b|| = ε||F(x) + L(x − a)|| ≤ ε||F(x)|| + ε||L(x − a)||,

by the triangle inequality. Since lim_{x→a} ||F(x)|| / ||x − a|| is zero, we get

    lim_{x→a} ||G(f(x))|| / ||x − a|| ≤ ε lim_{x→a} ||L(x − a)|| / ||x − a||.

Applying Lemma 4 again, we get ||L(x − a)|| ≤ c′||x − a||, for some c′ > 0. Now (i) follows easily.

(f) We can think of f + g as the composite h = s ∘ (f, g), where (f, g)(x) = (f(x), g(x)) and s(u, v) = u + v ("sum"). Set b = (f(a), g(a)). Applying (e), we get

    T_a(f + g) = T_b(s) ∘ T_a(f, g) = T_a(f) + T_a(g).

Done. The proofs of the product and quotient rules (i) and (ii) are similar and will be left to the reader. QED.

Remark. It is important to take note of the fact that a vector field f may be differentiable at a without the partial derivatives being continuous. We have a counterexample already when n = m = 1, as seen by taking

    f(x) = x² sin(1/x) if x ≠ 0,

and f(0) = 0. This is differentiable everywhere. The only question is at x = 0, where the relevant limit lim_{h→0} f(h)/h is clearly zero, so that f′(0) = 0. But for x ≠ 0, we have by the product rule

    f′(x) = 2x sin(1/x) − cos(1/x),

which does not tend to f′(0) = 0 as x goes to 0. So f′ is not continuous at 0.

2.4 Mixed Partial Derivatives

Let f be a scalar field, and a an interior point of its domain D ⊆ R^n. For j, k ≤ n, we may consider the second partial derivative

    ∂²f/∂x_j∂x_k (a) = ∂/∂x_j (∂f/∂x_k)(a),

when it exists. It is called a mixed partial derivative when j ≠ k, in which case it is of interest to know whether we have the equality

    (2.4.1)  ∂²f/∂x_j∂x_k (a) = ∂²f/∂x_k∂x_j (a).

Proposition 1. Suppose ∂²f/∂x_j∂x_k and ∂²f/∂x_k∂x_j both exist near a and are continuous there. Then the equality (2.4.1) holds.

The proof is similar to the proof of part (d) of Theorem 1.
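The equality of mixed partials asserted by Proposition 1 can be illustrated numerically with, say, g(x, y) = x³y², for which both mixed partials equal 6x²y. A sketch of mine (the stencil and names are not from the notes):

```python
def mixed_partial(g, a, j, k, h=1e-4):
    # Central-difference estimate of the mixed second partial of g
    # along coordinates j and k at the point a.
    def shift(p, idx, d):
        q = list(p)
        q[idx] += d
        return q
    return (g(shift(shift(a, j, h), k, h))
            - g(shift(shift(a, j, h), k, -h))
            - g(shift(shift(a, j, -h), k, h))
            + g(shift(shift(a, j, -h), k, -h))) / (4 * h * h)

g = lambda p: p[0] ** 3 * p[1] ** 2   # g(x, y) = x^3 y^2
a = [1.5, -0.5]
d_jk = mixed_partial(g, a, 0, 1)
d_kj = mixed_partial(g, a, 1, 0)
# Both approximate 6 x^2 y = 6 * 1.5^2 * (-0.5) = -6.75.
```

Here the two orders agree up to rounding, consistent with the continuity hypothesis of Proposition 1 being satisfied by a polynomial.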