Differentiable Functions


Let $S \subset \mathbb{R}^n$ be open and let $f : \mathbb{R}^n \to \mathbb{R}$. We recall that, for $x^o = (x^o_1, x^o_2, \dots, x^o_n) \in S$, the partial derivative of $f$ at the point $x^o$ with respect to the component $x_j$ is defined as

$$\frac{\partial f(x^o)}{\partial x_j} := \lim_{h \to 0} \frac{f(x^o_1, \dots, x^o_{j-1}, x^o_j + h, x^o_{j+1}, \dots, x^o_n) - f(x^o)}{h},$$

provided this limit exists. If this limit exists for the points $x \in S$, then we can differentiate the resulting function $x \mapsto \partial f(x)/\partial x_j$ with respect to any of the components of $x$ to obtain

$$\frac{\partial}{\partial x_k}\left(\frac{\partial f(x)}{\partial x_j}\right) = \frac{\partial^2 f(x)}{\partial x_k\, \partial x_j},$$

a second partial derivative of $f$. In particular, if $k \neq j$ we refer to this second partial derivative as a mixed partial derivative. An important property of the mixed partial derivatives is that

$$\frac{\partial}{\partial x_k}\left(\frac{\partial f(x)}{\partial x_j}\right) = \frac{\partial}{\partial x_j}\left(\frac{\partial f(x)}{\partial x_k}\right),$$

provided these second derivatives exist and are continuous. Higher order derivatives are defined in a similar manner.

A real-valued function $f : S \to \mathbb{R}$ will be said to be of class $C^{(k)}$ on the open set $S$ provided it is continuous and possesses continuous partial derivatives of all orders up to and including $k$. It will be said to be of class $C^{(\infty)}$ on $S$ if it is of class $C^{(k)}$ for all integers $k$. It will be said to be of class $C^{(k)}$ on an arbitrary set $S$ provided it is of class $C^{(k)}$ on a neighborhood of that set. We will also use the notation $C$ and $D$ for the classes $C^{(1)}$ and $C^{(2)}$ respectively. Alternate notations for the partial derivatives will also be used, for example,

$$f_{x_j} = \frac{\partial f}{\partial x_j}, \qquad f_{x_k, x_j} = \frac{\partial}{\partial x_k}\left(\frac{\partial f}{\partial x_j}\right), \quad \text{etc.}$$

The notion of differentiability of functions of several variables is related to the existence of partial derivatives but is not coincident with it. Indeed, we have the following definition: a function $f : S \to \mathbb{R}^m$, where $S \subset \mathbb{R}^n$ is an open set, is said to be differentiable at a point $x^o \in S$ provided there is a linear transformation $L : \mathbb{R}^n \to \mathbb{R}^m$ such that

$$\lim_{h \to 0} \frac{\| f(x^o + h) - f(x^o) - L\,h \|}{\| h \|} = 0.$$

In the case that such a linear transformation $L$ exists, it is called the derivative (sometimes the Fréchet derivative) or the differential of the function $f$ at the point $x^o$.
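As a quick numerical illustration of the symmetry of the mixed partials, the following sketch (not part of the original notes; the evaluation point is arbitrary, and the test function $x^3 y^2$ is borrowed from a later example in these notes) approximates the two mixed second partials by nested centered differences and compares them with the analytic value.

```python
import math

def f(x1, x2):
    # Test function, the same one used in a later example: f(x1, x2) = x1^3 * x2^2
    return x1**3 * x2**2

def partial(g, j, x, h=1e-5):
    # Centered-difference approximation of the j-th partial derivative of g at x.
    xp, xm = list(x), list(x)
    xp[j] += h
    xm[j] -= h
    return (g(*xp) - g(*xm)) / (2 * h)

def mixed(g, j, k, x, h=1e-4):
    # Approximate d/dx_k ( dg/dx_j ) at x by differencing the partial itself.
    xp, xm = list(x), list(x)
    xp[k] += h
    xm[k] -= h
    return (partial(g, j, xp) - partial(g, j, xm)) / (2 * h)

x = (1.5, -0.7)                 # arbitrary evaluation point
d12 = mixed(f, 0, 1, x)         # d/dx2 ( df/dx1 )
d21 = mixed(f, 1, 0, x)         # d/dx1 ( df/dx2 )
exact = 6 * x[0]**2 * x[1]      # analytic mixed partial, 6 x1^2 x2
print(d12, d21, exact)          # all three agree to several digits
```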

There are various notations for the differential. Since the linear transformation depends on the point $x^o$ we may denote it by $L = L(f; x^o)$ when we need to be specific. Another notation will be $f'(x^o; h) = L(f; x^o)(h)$.

Now, in the case that $f : S \to \mathbb{R}$, the linear transformation is a linear map from $\mathbb{R}^n$ to $\mathbb{R}$ and is therefore called a linear functional. Such a linear functional can be realized by the application of a dot product. Given any fixed vector $z \in \mathbb{R}^n$ it is clear that the map of $\mathbb{R}^n \to \mathbb{R}$ given by $\ell_z(h) := z \cdot h$ is a linear map, a fact which follows from the elementary properties of the dot product. On the other hand, it is well known that, given the standard basis of unit vectors, every such linear transformation is realized as a $1 \times n$ matrix, so that, if $y$ denotes this matrix, the linear functional is given by $h \mapsto y\,h$. In other words there is a one-to-one correspondence between vectors in $\mathbb{R}^n$ and linear transformations from $\mathbb{R}^n$ to $\mathbb{R}$.

It is shown in advanced calculus texts that if a real-valued function is differentiable on an open set, then the partial derivatives exist and the linear functional that defines the derivative is given by the map

$$h \mapsto \left(\frac{\partial f}{\partial x_1}, \frac{\partial f}{\partial x_2}, \dots, \frac{\partial f}{\partial x_n}\right) \cdot (h_1, h_2, \dots, h_n) = \sum_{i=1}^n f_{x_i} h_i.$$

Here we have suppressed the dependence on the point $x^o$. (Conversely, it can be shown that if the partial derivatives exist at a point $x^o$ and are all continuous in a neighborhood of that point, then the function $f$ is differentiable at $x^o$.) The vector $\left(\frac{\partial f}{\partial x_1}, \frac{\partial f}{\partial x_2}, \dots, \frac{\partial f}{\partial x_n}\right)$ is called the gradient of $f$ and is written variously as $\mathrm{grad}\, f$ or $\nabla f$. Hence we write the differential as

$$f'(x^o; h) = h \cdot \nabla f(x^o).$$

As a simple example in $\mathbb{R}^3$, suppose that $f(x, y, z) := x^2 + 3y^2 + 2z^2$. Then $\mathrm{grad}\, f(x, y, z) = (2x, 6y, 4z)$ so that, for example, at the point $x^o = (1, 0, 1)$, $\mathrm{grad}\, f(1, 0, 1) = (2, 0, 4)$. Note that the differential at this point is the map $h \mapsto \nabla f(1, 0, 1) \cdot h$, or $(h_1, h_2, h_3) \mapsto 2h_1 + 4h_3$.
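To see the definition of the differential at work on this example, the following rough sketch (illustrative only; the direction vector used to build $h$ is arbitrary) evaluates the ratio $|f(x^o + h) - f(x^o) - \nabla f(x^o)\cdot h| / \|h\|$ for $f(x, y, z) = x^2 + 3y^2 + 2z^2$ at $x^o = (1, 0, 1)$ and checks that it tends to $0$ as $\|h\| \to 0$.

```python
import numpy as np

def f(v):
    x, y, z = v
    return x**2 + 3*y**2 + 2*z**2

def grad_f(v):
    x, y, z = v
    return np.array([2*x, 6*y, 4*z])

x0 = np.array([1.0, 0.0, 1.0])
g = grad_f(x0)                          # (2, 0, 4), as computed in the text
direction = np.array([0.3, -0.5, 0.8])  # arbitrary direction for h

for t in [1.0, 1e-1, 1e-2, 1e-3, 1e-4]:
    h = t * direction
    ratio = abs(f(x0 + h) - f(x0) - g @ h) / np.linalg.norm(h)
    print(f"|h| = {np.linalg.norm(h):.1e}   ratio = {ratio:.3e}")
# The ratio decreases roughly linearly in |h|, consistent with differentiability.
```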

If the function $f$ is of class $C^{(2)}$ then the second partial derivatives are defined and continuous. We can consider the map from $S$ to $\mathbb{R}^n$ given by $\nabla f$ and ask for its derivative. This derivative is again linear, now a linear transformation from $\mathbb{R}^n$ to $\mathbb{R}^n$. It can be shown that this second derivative is then a bilinear form on $\mathbb{R}^n \times \mathbb{R}^n$ which can be realized in terms of a matrix, represented relative to the standard basis as

$$Q(x^o) := \begin{pmatrix}
\dfrac{\partial^2 f(x^o)}{\partial x_1 \partial x_1} & \dfrac{\partial^2 f(x^o)}{\partial x_1 \partial x_2} & \cdots & \dfrac{\partial^2 f(x^o)}{\partial x_1 \partial x_n} \\
\dfrac{\partial^2 f(x^o)}{\partial x_2 \partial x_1} & \dfrac{\partial^2 f(x^o)}{\partial x_2 \partial x_2} & \cdots & \dfrac{\partial^2 f(x^o)}{\partial x_2 \partial x_n} \\
\vdots & & & \vdots \\
\dfrac{\partial^2 f(x^o)}{\partial x_n \partial x_1} & \dfrac{\partial^2 f(x^o)}{\partial x_n \partial x_2} & \cdots & \dfrac{\partial^2 f(x^o)}{\partial x_n \partial x_n}
\end{pmatrix},$$

which, since the second partial derivatives are continuous, is a symmetric matrix. The second differential, or second Fréchet derivative, of the function is then given by $f''(x^o; h, k) := k \cdot Q h$. The matrix $Q$ is referred to as the Hessian matrix of $f$.

Clearly the mapping of $\mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$ given by $(h, k) \mapsto f''(x^o; h, k) = k \cdot Q h$ is a bilinear form, a form that is linear in $h$ for every fixed $k$ and linear in $k$ for each fixed $h$. We note that the values of this form are completely determined by the values of $f''(x^o; h, h)$ for $h \in \mathbb{R}^n$. This can be seen by the following computation, which is reminiscent of the binomial theorem:

$$f''(x^o; h + k, h + k) = f''(x^o; h, h + k) + f''(x^o; k, h + k) = f''(x^o; h, h) + 2 f''(x^o; h, k) + f''(x^o; k, k),$$

and hence

$$f''(x^o; h, k) = \frac{1}{2}\left( f''(x^o; h + k, h + k) - f''(x^o; h, h) - f''(x^o; k, k) \right).$$

Let us pause for a concrete example. Consider the case $n = 2$ and write the variables as $(x, y)$ rather than $(x_1, x_2)$. We continue to write $(h_1, h_2)$ for $h$ and $k = (k_1, k_2)$. Then, for $f(x, y) = x^3 y^2$, we have $\nabla f = (3x^2 y^2, 2x^3 y)$ and $f'(x; h) = 3x^2 y^2 h_1 + 2x^3 y\, h_2$, while

$$f''(x^o; h, k) = \begin{pmatrix} k_1 & k_2 \end{pmatrix} \begin{pmatrix} 6xy^2 & 6x^2 y \\ 6x^2 y & 2x^3 \end{pmatrix} \begin{pmatrix} h_1 \\ h_2 \end{pmatrix} = 6xy^2\, h_1 k_1 + 6x^2 y\,(h_2 k_1 + h_1 k_2) + 2x^3\, h_2 k_2.$$

In this example the matrix

$$Q = \begin{pmatrix} 6xy^2 & 6x^2 y \\ 6x^2 y & 2x^3 \end{pmatrix}$$

is the Hessian matrix. Note that it is symmetric.
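The polarization identity above can be checked numerically. The sketch below (illustrative only; the evaluation point and the vectors $h$, $k$ are arbitrary) builds the Hessian of the example $f(x, y) = x^3 y^2$ and verifies that $f''(x^o; h, k)$ is recovered from the quadratic values alone.

```python
import numpy as np

def hessian(x, y):
    # Hessian of f(x, y) = x^3 * y^2, as computed in the text.
    return np.array([[6*x*y**2, 6*x**2*y],
                     [6*x**2*y, 2*x**3  ]])

def B(Q, h, k):
    # The second differential as a bilinear form: f''(x; h, k) = k . Q h
    return k @ Q @ h

Q = hessian(1.2, -0.4)                  # arbitrary evaluation point
rng = np.random.default_rng(0)
h = rng.standard_normal(2)              # arbitrary test vectors
k = rng.standard_normal(2)

lhs = B(Q, h, k)
rhs = 0.5 * (B(Q, h + k, h + k) - B(Q, h, h) - B(Q, k, k))
print(lhs, rhs)                         # equal up to rounding, since Q is symmetric
```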

A particularly instructive and useful example for our future work is given in the case that $f$ is the quadratic function

$$f(x) = \frac{1}{2} \sum_{i=1}^n \sum_{j=1}^n a_{ij} x_i x_j + \sum_{i=1}^n b_i x_i + c,$$

where $a_{ij} = a_{ji}$, $b_i$, and $c$ are given constants. In matrix form, we write

$$f(x) = \frac{1}{2}\, x \cdot A x + b \cdot x + c,$$

where the $n \times n$ matrix $A = (a_{ij})$ is symmetric.

For a given index $k$, the variable $x_k$ is repeated in pairs in the first term defining $f$, namely when the index $j = k$ and when the index $i = k$. (This is the reason for the factor of $1/2$ in the definition.) So, for example, the derivative of the first term with respect to $x_1$ is

$$\frac{1}{2}\left( \sum_{j=1}^n a_{1j} x_j + \sum_{i=1}^n a_{i1} x_i \right).$$

Hence, differentiating the expression for $f$ with respect to $x_k$ we obtain

$$\frac{\partial f}{\partial x_k} = \frac{1}{2}\left( \sum_{i=1}^n a_{ik} x_i + \sum_{j=1}^n a_{kj} x_j \right) + b_k = \sum_{j=1}^n a_{kj} x_j + b_k,$$

since $a_{jk} = a_{kj}$ for all $j, k$ by hypothesis. Clearly, from this last form we have also that

$$\frac{\partial^2 f}{\partial x_i\, \partial x_j} = a_{ij}.$$

It follows that the gradient $\nabla f$ and the Hessian $Q$ are given by

$$\nabla f(x) = \left( \sum_{j=1}^n a_{1j} x_j + b_1, \dots, \sum_{j=1}^n a_{nj} x_j + b_n \right) = \left( \sum_{j=1}^n a_{1j} x_j, \dots, \sum_{j=1}^n a_{nj} x_j \right) + b \quad \text{and} \quad Q = (a_{ij}),$$

or

$$\nabla f(x) = A x + b \qquad \text{and} \qquad Q = A.$$

Note, in particular, that if $\varphi(x) := x \cdot x$ then $\nabla \varphi(x) = 2x$, and if $\nu(x) := \|x\|$ then $\nabla \nu(x) = x / \|x\|$.
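A finite-difference sanity check of $\nabla f(x) = Ax + b$ for the quadratic $f(x) = \tfrac12\, x \cdot Ax + b \cdot x + c$ is sketched below (not part of the original notes; the symmetric matrix, vector, constant, and test point are randomly generated for illustration).

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
A = (M + M.T) / 2                      # symmetric A, as required
b = rng.standard_normal(4)
c = 0.7

def f(x):
    return 0.5 * x @ A @ x + b @ x + c

x = rng.standard_normal(4)
eps = 1e-6
# Centered differences along each coordinate direction approximate the gradient.
grad_fd = np.array([(f(x + eps*e) - f(x - eps*e)) / (2*eps) for e in np.eye(4)])

print(np.allclose(grad_fd, A @ x + b, atol=1e-6))   # True: the gradient is Ax + b
```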

Now, the Hessian that appears in the above formula is a symmetric matrix, and for such matrices we have the following definition.

Definition 1.1 An $n \times n$ symmetric matrix $Q$ is said to be positive semi-definite provided, for all $x \in \mathbb{R}^n$, $\langle x, Qx \rangle \geq 0$. The matrix $Q$ is said to be positive definite provided, for all $x \in \mathbb{R}^n$, $x \neq 0$, $\langle x, Qx \rangle > 0$.

We emphasize that the notions of positive definite and positive semi-definite are defined only for symmetric matrices.

It is important in the theory of optimization to interpret the differentials of $f$ as directional derivatives. We begin with a fixed unit vector (or direction) $\hat{u} \in \mathbb{R}^n$ and a real-valued function $f$ defined and continuous in a neighborhood of the point $x^o$. We assume that the neighborhood is convex, i.e. that for any two points in the neighborhood, the line segment joining the two points is completely contained in the neighborhood. (In $\mathbb{R}^n$ it is easy to see that a $\delta$-neighborhood is such a set.) Then the directional derivative of $f$ at $x^o$ in the direction $\hat{u}$ is defined to be

$$(D_{\hat{u}} f)(x^o) := \lim_{t \to 0} \frac{f(x^o + t\hat{u}) - f(x^o)}{t}.$$

As a simple example, consider $f(x, y) := x^2 + 3xy$ and let $x^o = (2, 0)$. Let $\hat{u} = (1/\sqrt{2}, -1/\sqrt{2})$.
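Before carrying out the computation by hand, here is a small numerical sketch (illustrative only, not part of the original notes) that evaluates the difference quotient in the definition for exactly this $f$, $x^o$, and $\hat{u}$; the quotients approach $-\sqrt{2} \approx -1.4142$.

```python
import math

def f(x, y):
    return x**2 + 3*x*y

x0 = (2.0, 0.0)
u = (1/math.sqrt(2), -1/math.sqrt(2))     # unit direction from the example

def quotient(t):
    # Difference quotient (f(x0 + t*u) - f(x0)) / t from the definition.
    return (f(x0[0] + t*u[0], x0[1] + t*u[1]) - f(*x0)) / t

for t in [1.0, 0.1, 0.01, 0.001]:
    print(f"t = {t:7.3f}   quotient = {quotient(t):.6f}")
print("limit should be -sqrt(2) =", -math.sqrt(2))
```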

For this choice of $x^o$ and $\hat{u}$ we have

$$x^o + t\hat{u} = \left(2 + \tfrac{t}{\sqrt{2}},\; -\tfrac{t}{\sqrt{2}}\right), \qquad f(x^o + t\hat{u}) = \left(2 + \tfrac{t}{\sqrt{2}}\right)^2 + 3\left(2 + \tfrac{t}{\sqrt{2}}\right)\left(-\tfrac{t}{\sqrt{2}}\right) = 4 - \sqrt{2}\, t - t^2,$$

and so

$$(D_{\hat{u}} f)(x^o) = \lim_{t \to 0} \frac{(4 - \sqrt{2}\, t - t^2) - 4}{t} = -\sqrt{2}.$$

Now, if the function $f : \mathbb{R}^n \to \mathbb{R}$ is of class $C^{(k)}$ on $S$ and, for some $\delta > 0$, $y := x^o + t\hat{u} \in S$ for $-\delta < t < \delta$, then the function

$$\varphi(t) := f(x^o + t\hat{u}) = f(x^o_1 + t u_1, x^o_2 + t u_2, \dots, x^o_n + t u_n), \qquad -\delta < t < \delta,$$

is of class $C^{(k)}$ in $t$. If, in particular, $f \in C^{(1)}$, then by the chain rule for differentiation we have

$$\varphi'(t) = f_{x_1}(x^o + t\hat{u})\, u_1 + \dots + f_{x_n}(x^o + t\hat{u})\, u_n,$$

and so $\varphi'(0) = f'(x^o; \hat{u})$. We often write

$$\left. \frac{d}{dt} f(x^o + t\hat{u}) \right|_{t=0} = f'(x^o; \hat{u}).$$

This shows how to compute the directional derivative, since $f'(x^o; \hat{u}) = \nabla f(x^o) \cdot \hat{u}$. In the simple example given above, since $f(x, y) = x^2 + 3xy$, we have $\nabla f = (2x + 3y, 3x)$, so that

$$(D_{\hat{u}} f)(x^o) = \left(\tfrac{1}{\sqrt{2}}, -\tfrac{1}{\sqrt{2}}\right) \cdot (4, 6) = \tfrac{4}{\sqrt{2}} - \tfrac{6}{\sqrt{2}} = -\sqrt{2},$$

as before. Clearly, if $\hat{u} = e_j$, the standard $j$-th unit vector, then we recover the usual $j$-th partial derivative.

We now turn to the multidimensional analog of Taylor's formula. Again, we assume that $f$ is a real-valued function defined on an open set $S$. Then, if $x \in S$, we can choose $h$ so that, for every $t$, $0 \leq t \leq 1$, the line segment parameterized by $y(t) = x + th$ lies in $S$. Taylor's formula for $f$ can then be derived from the Taylor formula for the function $\varphi(t) := f(x + th)$. Indeed $\varphi : [0, 1] \to \mathbb{R}$ and, if $f \in C^{(1)}$, we have $\varphi(1) = f(x + h)$, $\varphi(0) = f(x)$, and $\varphi'(\theta) = f'(x + \theta h; h)$.
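The identity $\varphi'(\theta) = f'(x + \theta h; h) = \nabla f(x + \theta h) \cdot h$ is the key ingredient in the Taylor developments that follow; the sketch below (illustrative only; the base point and increment are arbitrary) checks it for the earlier example $f(x, y) = x^2 + 3xy$ by comparing a centered difference of $\varphi$ with the gradient dot product.

```python
def f(x, y):
    return x**2 + 3*x*y

def grad_f(x, y):
    return (2*x + 3*y, 3*x)

x = (1.0, 2.0)          # arbitrary base point for this check
h = (0.4, -0.3)         # arbitrary increment

def phi(t):
    # phi(t) = f(x + t h), the one-variable restriction used in the Taylor derivation
    return f(x[0] + t*h[0], x[1] + t*h[1])

theta = 0.37            # any point in (0, 1)
dt = 1e-6
phi_prime_fd = (phi(theta + dt) - phi(theta - dt)) / (2*dt)

g = grad_f(x[0] + theta*h[0], x[1] + theta*h[1])
phi_prime_chain = g[0]*h[0] + g[1]*h[1]     # f'(x + theta*h; h) = grad f . h

print(phi_prime_fd, phi_prime_chain)        # the two values agree up to rounding
```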

Using the Mean Value Theorem for functions of one variable, there is a number $\theta_1$, $0 < \theta_1 < 1$, such that

$$\varphi(1) = \varphi(0) + \varphi'(\theta_1) = \varphi(0) + \int_0^1 \varphi'(\theta)\, d\theta,$$

the second relation holding by integration. In terms of the original function $f$ we then have what we call the first-order Taylor expansion

$$f(x + h) = f(x) + f'(x + \theta_1 h; h) = f(x) + \int_0^1 f'(x + \theta h; h)\, d\theta.$$

Now, if the function $f \in C^{(2)}$ then so is the function $\varphi$ and we have, using the second-order Taylor expansion for a function of one variable,

$$\varphi(1) = \varphi(0) + \varphi'(0) + \tfrac{1}{2}\, \varphi''(\theta), \qquad \text{where } \theta \in (0, 1),$$

or, in terms of the integral remainder term,

$$\varphi(1) = \varphi(0) + \varphi'(0) + \int_0^1 (1 - \theta)\, \varphi''(\theta)\, d\theta.$$

(To derive this form, start with the first-order Taylor expansion for $\varphi$ with integral remainder and integrate by parts.) In terms of the original function $f$, we have

$$f(x + h) = f(x) + f'(x; h) + \tfrac{1}{2}\, f''(x + \theta h; h, h) = f(x) + f'(x; h) + \int_0^1 (1 - \theta)\, f''(x + \theta h; h, h)\, d\theta.$$
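The first-order expansion with integral remainder can be verified numerically by quadrature. The following sketch (illustrative only; the base point and increment are arbitrary, and a simple composite trapezoidal rule stands in for exact integration) uses the earlier example $f(x, y) = x^3 y^2$.

```python
import numpy as np

def f(v):
    x, y = v
    return x**3 * y**2          # example function used earlier in the notes

def grad_f(v):
    x, y = v
    return np.array([3*x**2*y**2, 2*x**3*y])

x = np.array([1.0, 0.5])        # arbitrary base point
h = np.array([0.3, -0.2])       # arbitrary increment

# Approximate  int_0^1  grad f(x + theta h) . h  d theta  by the trapezoidal rule.
n = 2000
thetas = np.linspace(0.0, 1.0, n + 1)
values = np.array([grad_f(x + t*h) @ h for t in thetas])
integral = (values[0]/2 + values[1:-1].sum() + values[-1]/2) / n

print(f(x + h) - f(x), integral)   # the two numbers agree to several decimal places
```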

Now, if we write

$$r_2(x, h) := \int_0^1 (1 - \theta)\left[ f''(x + \theta h; h, h) - f''(x; h, h) \right] d\theta,$$

we can write the second-order Taylor expansion in the form

$$f(x + h) = f(x) + f'(x; h) + \tfrac{1}{2}\, f''(x; h, h) + r_2(x, h).$$
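As a final numerical sanity check (illustrative only; the base point and increment are arbitrary, and the remainder integral is approximated by a trapezoidal rule), the sketch below evaluates $r_2(x, h)$ for the example $f(x, y) = x^3 y^2$, confirms the expansion above, and illustrates that $r_2(x, h)/\|h\|^2 \to 0$ as $\|h\| \to 0$.

```python
import numpy as np

def f(v):
    x, y = v
    return x**3 * y**2                       # same example as before

def grad_f(v):
    x, y = v
    return np.array([3*x**2*y**2, 2*x**3*y])

def hess_f(v):
    x, y = v
    return np.array([[6*x*y**2, 6*x**2*y],
                     [6*x**2*y, 2*x**3  ]])

def second_diff(v, h):
    # f''(v; h, h) = h . Q(v) h
    return h @ hess_f(v) @ h

x = np.array([1.0, 0.5])                     # arbitrary base point

for s in [1.0, 0.5, 0.25, 0.125]:
    h = s * np.array([0.3, -0.2])
    # r2(x, h) = int_0^1 (1 - theta) [ f''(x + theta h; h, h) - f''(x; h, h) ] d theta,
    # approximated with a composite trapezoidal rule.
    n = 2000
    thetas = np.linspace(0.0, 1.0, n + 1)
    vals = np.array([(1 - t) * (second_diff(x + t*h, h) - second_diff(x, h))
                     for t in thetas])
    r2 = (vals[0]/2 + vals[1:-1].sum() + vals[-1]/2) / n

    taylor = f(x) + grad_f(x) @ h + 0.5 * second_diff(x, h) + r2
    print(f"|h| = {np.linalg.norm(h):.3f}  "
          f"expansion error = {abs(f(x + h) - taylor):.2e}  "
          f"r2 / |h|^2 = {r2 / np.linalg.norm(h)**2:.2e}")
# The expansion error stays at quadrature/rounding level, and r2/|h|^2 shrinks with |h|.
```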
