Optimization. For QEM-IMAEF-MMEF, this is the first part (September-November); for MAEF, it corresponds to the whole semester.


1 Paris. Optimization. For QEM-IMAEF-MMEF, this is the first part (September-November); for MAEF, it corresponds to the whole semester. (Paris 1 Panthéon-Sorbonne and PSE) Paris, 2016.

2 Why optimization? Some examples: An individual optimizes his consumption basket at the supermarket (this is a behavioural model!). A social planner optimizes when taking decisions: minimize inequalities, maximize total well-being. A firm optimizes when taking decisions: maximize profit, minimize costs. Nature optimizes: the brachistochrone curve. A student optimizes: maximize the grade while minimizing the work. In finance: maximize return, minimize risk.

3 Prerequisite. Section 1: real functions of one variable. In this course, we will be interested in real functions, that is, functions f : D → R. The variable in the first set D is often denoted x, and f(x) (often denoted y = f(x)), in the second set, is the image of x by f. A real function of one variable is a function f : D → R, where D ⊆ R. The domain (or definition set) of such a function is {x : f(x) is well defined}.

4 Prerequisite. Section 1: real functions of one variable. The image of a real function f : D → R, denoted f(D), is f(D) = {y ∈ R : ∃x ∈ D, y = f(x)}. You have to know what the limit (if it exists) l = lim_{x→x̄} f(x) is, defined by: ∀ε > 0, ∃η > 0 : ∀x ∈ D, |x − x̄| < η ⇒ |f(x) − l| < ε. The right limit (if it exists) l⁺ = lim_{x→x̄⁺} f(x) is defined by: ∀ε > 0, ∃η > 0 : ∀x ∈ D, x̄ < x < x̄ + η ⇒ |f(x) − l⁺| < ε. The left limit (if it exists) l⁻ = lim_{x→x̄⁻} f(x) is defined by: ∀ε > 0, ∃η > 0 : ∀x ∈ D, x̄ − η < x < x̄ ⇒ |f(x) − l⁻| < ε.

5 Prerequisite. Section 1: real functions of one variable. This definition of the limit is valid for finite limits at finite points. Otherwise, one can also define lim_{x→x̄} f(x) = +∞ by: ∀X > 0, ∃η > 0 : ∀x ∈ D, |x − x̄| < η ⇒ f(x) > X. And also lim_{x→+∞} f(x) = l by: ∀ε > 0, ∃X > 0 : ∀x ∈ D, x > X ⇒ |f(x) − l| < ε. In a similar way we can define all the other cases (limit at a finite point, at +∞, at −∞, etc.).

6 Prerequisite. Section 1: real functions of one variable. You also have to know what a sequence (u_n) is (a particular function from N to R!) and what the limit l = lim_{n→+∞} u_n is, defined by: ∀ε > 0, ∃N > 0 : ∀n ≥ N, |u_n − l| < ε. Similarly, +∞ = lim_{n→+∞} u_n is defined by: ∀X ∈ R, ∃N > 0 : ∀n ≥ N, u_n > X. Finally, −∞ = lim_{n→+∞} u_n is defined by: ∀X ∈ R, ∃N > 0 : ∀n ≥ N, u_n < X.

7 Prerequisite. Section 1: functions of one variable. You have to know what a subsequence (u_{φ(n)})_{n∈N} of a sequence (u_n)_{n∈N} is, where φ : N → N is strictly increasing. In practice, you have to know geometric sequences (u_{n+1} = q·u_n for every n ≥ 0, then u_n = qⁿ·u₀), arithmetic sequences (u_{n+1} = r + u_n for every n ≥ 0, then u_n = n·r + u₀), and their limits (depending on r, q, ...). In practice, to find limits: i) Try the basic rules (sum, multiplication, ...). ii) Try to majorize. iii) Use monotonicity. iv) For functions, use continuity (see below!) if known.
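
The closed forms for geometric and arithmetic sequences can be checked against their recursive definitions; a minimal Python sketch (the values q = 0.5, r = 2 and u₀ = 3 are illustrative choices, not from the course):

```python
# Check the closed forms u_n = q**n * u0 (geometric) and u_n = n*r + u0
# (arithmetic) against the recursive definitions of the sequences.

def geometric(u0, q, n):
    """u_{k+1} = q * u_k, computed recursively."""
    u = u0
    for _ in range(n):
        u = q * u
    return u

def arithmetic(u0, r, n):
    """u_{k+1} = r + u_k, computed recursively."""
    u = u0
    for _ in range(n):
        u = r + u
    return u

# Closed forms agree with the recursions:
assert geometric(3.0, 0.5, 10) == 0.5**10 * 3.0
assert arithmetic(3.0, 2.0, 10) == 10 * 2.0 + 3.0

# A geometric sequence with |q| < 1 tends to 0:
assert abs(geometric(3.0, 0.5, 60)) < 1e-15
```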

8 Prerequisite. Section 1: functions of one variable. Exercise 1 Find the limit of u_n = (1/2)² + n, v_n = (−n)(1/2)², w_n = ln(n)/n². Exercise 2 Find the limit, right limit and left limit of the following functions f, g and h at 0: f(x) = x if x < 0 and f(x) = x² if x ≥ 0; g(x) = x³·ln(x); h(x) = 1 if x < 0 and h(x) = x + 2 if x ≥ 0. Exercise 3 i) Find a sequence which has no limit, finite or not. ii) Find two sequences (u_n) and (v_n) with no finite limit, such that (u_n + v_n) has a finite limit.

9 Prerequisite. Section 1: functions of one variable. Important properties a function can satisfy (or not): 1) Continuity. A real function f is continuous at x̄ ∈ D if ∀ε > 0, ∃η > 0 : ∀x ∈ D, |x − x̄| ≤ η ⇒ |f(x) − f(x̄)| ≤ ε. Equivalently, lim_{x→x̄} f(x) = f(x̄). An intuitive definition: small perturbations of x̄ in D cannot create too large perturbations of f(x̄). In practice, to prove continuity, use the basic rules (sums, quotients, differences, products and compositions of continuous functions remain continuous where well defined!). Sequential criterion of continuity: a real function f is continuous at x̄ ∈ D if and only if for every sequence (u_n) converging to x̄, the sequence (f(u_n)) converges to f(x̄).

10 Prerequisite. Section 1: functions of one variable. 2) Usual functions. In practice, you have to know that the following functions are continuous on their definition sets: linear functions, affine functions, polynomials, eˣ, ln(x), cos(x), sin(x), aˣ, and that the usual operations on continuous functions (sum, product, division, exponentiation) give functions that are continuous on their definition sets. Sometimes, functions are defined piecewise from continuous pieces. To check whether they are continuous (or discontinuous), look at the left and right limits at the threshold points between the pieces, and use the fact that f is continuous at x̄ if its left and right limits at x̄ both exist and equal f(x̄).

11 Prerequisite. Section 1: functions of one variable. The graph (denoted G(f)) of a real function f : D → R is G(f) = {(x, f(x)) : x ∈ D}. A real function f has a derivative (denoted f′(x̄)) at x̄ if the following limit exists: f′(x̄) = lim_{ε→0} (f(x̄ + ε) − f(x̄))/ε. Geometrically, it corresponds to the slope of the tangent line to G(f) at (x̄, f(x̄)), whose equation is y = f′(x̄)(x − x̄) + f(x̄). If f′(x) exists for every x ∈ D, we can define the derivative function f′ : D → R. f is said to be C⁰ on D if it is continuous on D; f is said to be C¹ on D if f′ exists on D and is continuous on D. We can continue like that: C², C³, ... when it is possible. The condition "to be C^k on D" is more and more restrictive: if f is C¹, it is C⁰; if f is C², it is C¹; etc. But the converse can be false: for example, f(x) = x·|x| is C¹ on R but not C².
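
The limit defining the derivative can be approximated numerically; a minimal sketch in Python, using a symmetric difference quotient (slightly more accurate than the one-sided quotient in the definition; the test function f(x) = x² is an illustrative choice):

```python
def derivative(f, x, eps=1e-6):
    """Symmetric difference quotient approximating f'(x)."""
    return (f(x + eps) - f(x - eps)) / (2 * eps)

f = lambda x: x**2                     # known derivative: f'(x) = 2x
assert abs(derivative(f, 3.0) - 6.0) < 1e-6

# Tangent line at xbar = 3: y = f'(xbar)*(x - xbar) + f(xbar)
xbar = 3.0
tangent = lambda x: derivative(f, xbar) * (x - xbar) + f(xbar)
assert abs(tangent(3.0) - f(3.0)) < 1e-12   # touches the graph at xbar
```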

12 Prerequisite. Section 1: functions of one variable. Other important properties of a function f : R → R: increasing, decreasing. f is increasing on D ⊆ R if for every (x, y) ∈ D² with x ≤ y, we have f(x) ≤ f(y). f is strictly increasing on D ⊆ R if for every (x, y) ∈ D² with x < y, we have f(x) < f(y). f is decreasing on D ⊆ R if for every (x, y) ∈ D² with x ≤ y, we have f(x) ≥ f(y). f is strictly decreasing on D ⊆ R if for every (x, y) ∈ D² with x < y, we have f(x) > f(y).

13 Prerequisite. Section 1: functions of one variable. Be careful: not being increasing does not mean being decreasing! And monotonicity depends on the domain you consider. In practice, the derivative, when it exists, is a practical tool to study the monotonicity of f: if for every x ∈ ]a, b[, f′(x) > 0, then f is strictly increasing on ]a, b[; if for every x ∈ ]a, b[, f′(x) ≥ 0, then f is increasing on ]a, b[; if for every x ∈ ]a, b[, f′(x) < 0, then f is strictly decreasing on ]a, b[; if for every x ∈ ]a, b[, f′(x) ≤ 0, then f is decreasing on ]a, b[. Question: if for every x ∈ ]0, 1[ ∪ ]1, 2[, f′(x) > 0, is f strictly increasing on ]0, 1[ ∪ ]1, 2[? NO! (For example, f(x) = x on ]0, 1[ and f(x) = x − 2 on ]1, 2[: then f′ > 0 everywhere, but f(0.9) > f(1.1).)

14 Prerequisite. Section 2: functions of several variables. In this course, we will also often be interested in functions f : D → R, where D ⊆ Rⁿ. The variable (often denoted x = (x₁, ..., xₙ)) in D is now a vector, and f(x₁, ..., xₙ) (often denoted y = f(x)), in the second set, is, again, a real number. Again, f(x₁, ..., xₙ) is not always well defined, and we denote by D(f) ⊆ Rⁿ the definition set (or domain) of f. It is important to be able to generalize the intervals ]a, b[, [a, b], etc.: definitions of the Euclidean distance, of open (Euclidean) balls, of closed (Euclidean) balls, and of the (Euclidean) scalar product ⟨·,·⟩ on Rⁿ:

15 Prerequisite. Section 2: functions of several variables. B_O(x, r) = {y ∈ Rⁿ : √(Σ_{k=1}^{n} (y_k − x_k)²) < r}, where x = (x₁, ..., xₙ) is the center of the (open) ball and r ≥ 0 the radius. B_C(x, r) = {y ∈ Rⁿ : √(Σ_{k=1}^{n} (y_k − x_k)²) ≤ r}, where x = (x₁, ..., xₙ) is the center of the (closed) ball and r ≥ 0 the radius. d(x, y) = √(Σ_{k=1}^{n} (y_k − x_k)²) is the (Euclidean) distance between x = (x₁, ..., xₙ) and y = (y₁, ..., yₙ). ⟨x, y⟩ = Σ_{k=1}^{n} x_k·y_k is the Euclidean scalar product between x and y. ‖x‖ = √(Σ_{k=1}^{n} x_k²) is the Euclidean norm of x.
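
The Euclidean scalar product, norm and distance above translate directly into code; a minimal sketch (the sample vectors are illustrative):

```python
import math

def dot(x, y):
    """Euclidean scalar product <x, y> = sum of x_k * y_k."""
    return sum(a * b for a, b in zip(x, y))

def norm(x):
    """Euclidean norm ||x|| = sqrt(<x, x>)."""
    return math.sqrt(dot(x, x))

def dist(x, y):
    """Euclidean distance d(x, y) = ||x - y||."""
    return norm([a - b for a, b in zip(x, y)])

x, y = [1.0, 2.0, 2.0], [1.0, 0.0, 0.0]
assert dot(x, y) == 1.0
assert norm(x) == 3.0                  # sqrt(1 + 4 + 4)
assert dist(x, y) == norm([0.0, 2.0, 2.0])
# y belongs to the open ball B_O(x, r) iff d(x, y) < r:
assert dist(x, y) < 3.0 and not dist(x, y) < 2.0
```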

16 Prerequisite. Section 2: functions of several variables. A function f is continuous at x̄ ∈ Rⁿ if ∀ε > 0, ∃η > 0 : ∀x ∈ D, d(x, x̄) < η ⇒ |f(x) − f(x̄)| ≤ ε. This is also equivalent to lim_{x→x̄} f(x) = f(x̄), where all the limits are defined as previously, but where we use the Euclidean distance to express that x and x̄ should be close to each other. For example: l = lim_{x→x̄} f(x) is defined by ∀ε > 0, ∃η > 0 : ∀x ∈ D, d(x, x̄) < η ⇒ |f(x) − l| < ε. l = lim_{‖x‖→+∞} f(x) is defined by ∀ε > 0, ∃X > 0 : ∀x ∈ D, ‖x‖ > X ⇒ |f(x) − l| < ε.

17 Prerequisite. Section 2: functions of several variables. In practice, you have to know that multivariable polynomials are continuous functions, and that composition, sum, subtraction and division preserve continuity as long as we remain on the definition set of the functions. There can be problems if the function is defined piecewise.

18 Prerequisite. Section 2: functions of several variables. The graph of a function f : D → R (where D ⊆ Rⁿ) is defined as before. A function f has a partial derivative with respect to the i-th variable at x̄ ∈ Rⁿ (denoted ∂f/∂x_i(x̄)) if the following limit exists: ∂f/∂x_i(x̄) = lim_{ε→0} (f(x̄ + ε·e_i) − f(x̄))/ε, where e_i = (0, 0, ..., 0, 1, 0, ..., 0) (only one 1, at position i). This allows us to define the partial derivative functions, for example x ↦ ∂f/∂x_i(x). Such a function can itself sometimes admit a partial derivative with respect to the variable x_i (then denoted ∂²f/∂x_i²(x)) or with respect to x_j, j ≠ i (then denoted ∂²f/∂x_j∂x_i(x)); these are called second partial derivatives.

19 Prerequisite. Section 2: functions of several variables. If the partial derivatives exist for every i = 1, ..., n at x = (x₁, ..., xₙ) ∈ Rⁿ, we define the gradient of f at x to be the vector ∇f(x) = (∂f/∂x₁(x), ..., ∂f/∂xₙ(x)). A multivariable real function f is said to be C⁰ on Rⁿ if it is continuous; f is said to be C¹ on Rⁿ if all partial derivatives exist and are continuous. We can continue like that: C², C³, ... when it is possible.
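
Each partial derivative is a one-variable limit, so the gradient can be approximated coordinate by coordinate with difference quotients; a minimal sketch (the function f(x) = x₁² + 3x₂ is an illustrative choice):

```python
def gradient(f, x, eps=1e-6):
    """Approximate the gradient of f at x by symmetric difference quotients
    (f(x + eps*e_i) - f(x - eps*e_i)) / (2*eps) along each coordinate i."""
    grad = []
    for i in range(len(x)):
        xp = list(x); xp[i] += eps
        xm = list(x); xm[i] -= eps
        grad.append((f(xp) - f(xm)) / (2 * eps))
    return grad

f = lambda x: x[0]**2 + 3 * x[1]       # exact gradient: (2*x1, 3)
g = gradient(f, [1.0, 5.0])
assert abs(g[0] - 2.0) < 1e-5
assert abs(g[1] - 3.0) < 1e-5
```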

20 Prerequisite. Section 2: functions of several variables. The Hessian of a C² function f at x is the n × n matrix Hess_x(f) = (∂²f/∂x_i∂x_j(x))_{1≤i,j≤n}. Schwarz theorem: when f is C² on D, Hess_x(f) is symmetric for every x ∈ D. Local first-order approximation of a C¹ function on D: f(x + h) = f(x) + ⟨∇f(x), h⟩ + ‖h‖·ε(‖h‖), where ε : R → R tends to zero at 0. Local second-order approximation of a C² function on D: f(x + h) = f(x) + ⟨∇f(x), h⟩ + (1/2)·ᵗh·Hess_x(f)·h + ‖h‖²·ε(‖h‖), where ε : R → R tends to zero at 0. Here, ᵗh·Hess_x(f)·h denotes the product of the 1 × n matrix ᵗh (the transpose of h), the n × n matrix Hess_x(f) and the n × 1 matrix h whose components are those of h.
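
Schwarz's theorem can be illustrated numerically: approximating the second partial derivatives by second difference quotients, the two cross partials coincide; a sketch (the function f(x) = x₁²x₂ + x₂³ is an illustrative choice):

```python
def hessian(f, x, eps=1e-4):
    """Approximate the Hessian of f at x by second difference quotients."""
    n = len(x)
    H = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            xpp = list(x); xpp[i] += eps; xpp[j] += eps
            xpm = list(x); xpm[i] += eps; xpm[j] -= eps
            xmp = list(x); xmp[i] -= eps; xmp[j] += eps
            xmm = list(x); xmm[i] -= eps; xmm[j] -= eps
            H[i][j] = (f(xpp) - f(xpm) - f(xmp) + f(xmm)) / (4 * eps**2)
    return H

f = lambda x: x[0]**2 * x[1] + x[1]**3   # cross partials: d2f/dx1dx2 = 2*x1
H = hessian(f, [2.0, 1.0])
# Schwarz: the Hessian of a C^2 function is symmetric.
assert abs(H[0][1] - H[1][0]) < 1e-4
assert abs(H[0][1] - 4.0) < 1e-3
```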

21 Prerequisite. Section 3: openness, closedness, compactness, ... We consider on Rⁿ the Euclidean distance, with the associated Euclidean norm and Euclidean scalar product. C ⊆ Rⁿ is open if for every x ∈ C there exists r > 0 such that B_O(x, r) ⊆ C. F ⊆ Rⁿ is closed if for every sequence (x_n) in F which converges to some x ∈ Rⁿ, we have x ∈ F. In practice, it is often easier to use the following criteria: if C = f⁻¹(I) := {x ∈ Rⁿ : f(x) ∈ I}, where f is a continuous mapping from Rⁿ to R and I an open subset of R (for example an open interval), then C is open. If C = f⁻¹(I) := {x ∈ Rⁿ : f(x) ∈ I}, where f is a continuous mapping from Rⁿ to R and I a closed subset of R (for example a closed interval), then C is closed. The complement of an open set is closed, and the complement of a closed set is open.

22 Prerequisite. Section 3: openness, closedness, compactness, ... x is said to be interior to C ⊆ Rⁿ if there exists r > 0 such that B_O(x, r) ⊆ C. The set of interior points of C is denoted by int(C). x is said to be in the closure of C ⊆ Rⁿ if x is the limit of some sequence of points of C. The set of points in the closure of C, called the closure of C, is denoted by C̄. x is said to be in the boundary of C ⊆ Rⁿ if x is in the closure of C but not in the interior of C. The boundary of C (the set of such points) is denoted by ∂C.

23 Prerequisite. Section 3: openness, closedness, compactness, ... A subset C of Rⁿ is said to be bounded if there exists M > 0 such that for every x ∈ C, ‖x‖ ≤ M. A subset C of Rⁿ is said to be compact if it is closed and bounded. This is equivalent to: every sequence of C admits a subsequence converging in C.

24 Prerequisite. Section 4: basic facts about convex functions and convex subsets. C ⊆ Rⁿ is convex if for every (x, y) ∈ C² and every λ ∈ [0, 1], λx + (1 − λ)y ∈ C (please draw a picture, and interpret in terms of segments and holes!). Basic operations: intersection and product preserve convexity, but not union in general (draw pictures!). Be careful: there is no natural notion of concavity for sets! Let C ⊆ Rⁿ be convex. A function f : C → R is said to be convex if ∀(x, y) ∈ C², ∀λ ∈ [0, 1], f(λx + (1 − λ)y) ≤ λf(x) + (1 − λ)f(y). Please draw a picture; interpretation in terms of convexity of the set {(x, y) ∈ C × R : f(x) ≤ y}. The function f : C → R is said to be strictly convex if ∀(x, y) ∈ C², x ≠ y, ∀λ ∈ ]0, 1[, f(λx + (1 − λ)y) < λf(x) + (1 − λ)f(y).

25 Prerequisite. Section 5: convexity, concavity, ... Let C ⊆ Rⁿ be convex. A function f : C → R is said to be concave if ∀(x, y) ∈ C², ∀λ ∈ [0, 1], f(λx + (1 − λ)y) ≥ λf(x) + (1 − λ)f(y). Please draw a picture; interpretation in terms of convexity of the set {(x, y) ∈ C × R : f(x) ≥ y}. The function f : C → R is said to be strictly concave if ∀(x, y) ∈ C², x ≠ y, ∀λ ∈ ]0, 1[, f(λx + (1 − λ)y) > λf(x) + (1 − λ)f(y).

26 Prerequisite. Section 5: convexity, concavity, ... If f : ]a, b[ → R is C² and f″(x) ≥ 0 for every x ∈ ]a, b[, then f is convex on ]a, b[. If f : ]a, b[ → R is C² and f″(x) > 0 for every x ∈ ]a, b[, then f is strictly convex on ]a, b[. If f : ]a, b[ → R is C² and f″(x) ≤ 0 for every x ∈ ]a, b[, then f is concave on ]a, b[. If f : ]a, b[ → R is C² and f″(x) < 0 for every x ∈ ]a, b[, then f is strictly concave on ]a, b[. Be careful: 1/x² has a strictly positive second derivative on R − {0}, but is not convex: why?
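
A minimal numerical illustration of the second-derivative criterion, and of why it needs an interval: g(x) = 1/x² has g″(x) = 6/x⁴ > 0 at every point of R − {0}, yet the convexity inequality fails across the hole at 0 (the sample points are illustrative choices):

```python
import math

def midpoint_convex(f, x, y):
    """Check the convexity inequality f((x+y)/2) <= (f(x)+f(y))/2
    at the single midpoint of x and y."""
    return f((x + y) / 2) <= (f(x) + f(y)) / 2

# exp has a positive second derivative on any interval, hence is convex:
assert midpoint_convex(math.exp, -2.0, 3.0)

# g(x) = 1/x**2 has g''(x) = 6/x**4 > 0 at every point of R - {0},
# yet g is NOT convex on R - {0}: the criterion needs an interval.
g = lambda x: 1.0 / x**2
assert not midpoint_convex(g, -1.0, 0.5)   # fails across the hole at 0
```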

27 Prerequisite. Section 6: basic facts about supremum and infimum. The real m ∈ R is a lower bound (or minorant) of S ⊆ R if ∀s ∈ S, m ≤ s. Not unique. May not exist. If it exists, S is said to be bounded below. The real M ∈ R is an upper bound (or majorant) of S ⊆ R if ∀s ∈ S, s ≤ M. Not unique. May not exist. If it exists, S is said to be bounded above. The set S ⊆ R is bounded if it is bounded below and above. In practice...

28 Prerequisite. Section 6: basic facts about supremum and infimum. The real M ∈ R is the greatest element of S ⊆ R if it is an upper bound of S and M ∈ S. May not exist. Unique if it exists. The real m ∈ R is the least element of S ⊆ R if it is a lower bound of S and m ∈ S. May not exist. Unique if it exists. The supremum of S ⊆ R, denoted sup S, is the least majorant of S. May not exist. Exists when S is nonempty and bounded above. The infimum of S ⊆ R, denoted inf S, is the greatest minorant of S. May not exist. Exists when S is nonempty and bounded below.

29 Prerequisite. Section 6: basic facts about supremum and infimum. In practice, sup S (if it exists) is the only majorant of S that is the limit of a sequence (s_n) of points of S. In practice, inf S (if it exists) is the only minorant of S that is the limit of a sequence (s_n) of points of S. Examples: sup ]a, b[ = b and sup ]a, b] = b, but in this last case we also write max ]a, b] = b, because the supremum belongs to the set. Examples: inf ]a, b[ = a and inf [a, b[ = a, but in this last case we also write min [a, b[ = a, because the infimum belongs to the set.

30 Paris. We now really begin optimization! (Paris 1 Panthéon-Sorbonne and PSE) Paris, 2017.

31 Motivation Optimization is everywhere in economics, social sciences, finance, ... Social networks: what is the right target if you want to be friends with many people? What is the effect, on the shape of the network, of the maximizing behaviour of each person? Finance: maximizing returns, minimizing risk, ... Economics: maximizing intertemporal profit, trade-off between investing (for the future) and consuming (for now!).

32 Chapter 1: Optimization (vocabulary) Let f : E → R, and denote by D(f) the domain of f. Let C ⊆ D(f). Consider the maximization and minimization problems: (P) max_{x∈C} f(x). (Q) min_{x∈C} f(x). f is the objective function, C the set of feasible points (or the constraint set). The value of (P) (resp. of (Q)) is Val(P) = sup_{x∈C} f(x), Val(Q) = inf_{x∈C} f(x). By convention, we will write Val(P) = +∞ when f is not bounded above on C, and Val(Q) = −∞ when f is not bounded below on C.

33 Chapter 1: Optimization (vocabulary) Just recall that f : E → R is bounded below on C ⊆ E if there exists m ∈ R (called a minorant of f on C) such that ∀x ∈ C, m ≤ f(x). If this condition is true, the infimum of f on C, denoted inf_{x∈C} f(x), is a real number and is the greatest minorant of f on C. In practice, α = inf_{x∈C} f(x) is the only real that satisfies: (i) ∀x ∈ C, α ≤ f(x). (ii) There exists a sequence (x_n)_{n∈N} of C such that lim_{n→+∞} f(x_n) = α.

34 Chapter 1: Optimization (vocabulary) Just recall that f : E → R is bounded above on C ⊆ E if there exists M ∈ R (called a majorant of f on C) such that ∀x ∈ C, f(x) ≤ M. If this condition is true, the supremum of f on C, denoted sup_{x∈C} f(x), is a real number and is the least majorant of f on C. In practice, β = sup_{x∈C} f(x) is the only real that satisfies: (i) ∀x ∈ C, f(x) ≤ β. (ii) There exists a sequence (x_n)_{n∈N} of C such that lim_{n→+∞} f(x_n) = β.

35 Chapter 1: Optimization (vocabulary) Let f : E → R, and denote by D(f) the domain of f. Let C ⊆ D(f). x̄ ∈ C is a solution of (P) max_{x∈C} f(x) if Val(P) = f(x̄). x̄ ∈ C is a solution of (Q) min_{x∈C} f(x) if Val(Q) = f(x̄).

36 Chapter 1: Optimization (vocabulary) To define local solutions, we now assume D(f) ⊆ Rⁿ for some n. x̄ ∈ C is a local solution of (P) max_{x∈C} f(x) if ∃ε > 0, ∀x ∈ C ∩ B(x̄, ε), f(x) ≤ f(x̄). x̄ ∈ C is a local solution of (Q) min_{x∈C} f(x) if ∃ε > 0, ∀x ∈ C ∩ B(x̄, ε), f(x) ≥ f(x̄).

37 Chapter 1: Optimization (vocabulary) Let (P) max_{x∈C} f(x). A maximizing sequence for (P) is any sequence (x_n) of C such that the sequence (f(x_n)) converges to Val(P). Let (Q) min_{x∈C} f(x). A minimizing sequence for (Q) is any sequence (x_n) of C such that the sequence (f(x_n)) converges to Val(Q). Theorem There always exists a maximizing sequence for (P) and a minimizing sequence for (Q).

38 Chapter 1: Exercise Consider (P) max_{x∈C} eˣ − x. Find the value, the solutions and a maximizing sequence when C = [1, 2], when C = [0, 10], when C = [0, +∞[. Consider (Q) min_{x∈C} eˣ/x. Find the value, the solutions and the local solutions when C = (−∞, 0) ∪ (0, +∞).

39 Chapter 2: Existence of solutions of optimization problems Section 0: the case of f : D → R, D ⊆ R. For C ⊆ D, consider the problem (P) min_{x∈C} f(x). In this case, if possible, simply study the variations of f on C, which permits to find the local maxima or local minima, and compare them with the values (or limits) of f at the boundary points of C. Example: (P) min_{x∈R} x + 1/x. For some problems with several variables, the constraint permits to eliminate variables and reduce to only one variable. Example: (P) min_{(x,y)∈[0,1]×[0,1] : x+y=1} x² + y².
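
Both worked examples can be checked by a crude grid search; a sketch (restricting the first problem to x > 0 is an assumption made here so that a minimum exists, and the grids are illustrative):

```python
# Example 1: minimize x + 1/x, restricted here to ]0, +inf[ (assumption:
# on x < 0 the function is unbounded below near 0). The variations give
# f'(x) = 1 - 1/x**2 = 0 at x = 1, so the minimum is f(1) = 2.
f = lambda x: x + 1.0 / x
grid = [0.01 + 0.01 * k for k in range(2000)]   # crude grid on ]0, 20]
best = min(grid, key=f)
assert abs(f(best) - 2.0) < 1e-3
assert abs(best - 1.0) < 0.02

# Example 2: on {(x, y) in [0,1]^2 : x + y = 1}, eliminate y = 1 - x and
# minimize g(x) = x**2 + (1-x)**2 on [0, 1]; g'(x) = 4x - 2 = 0 at x = 1/2.
g = lambda x: x**2 + (1 - x) ** 2
xs = [k / 1000 for k in range(1001)]
xstar = min(xs, key=g)
assert abs(xstar - 0.5) < 1e-3
assert abs(g(xstar) - 0.5) < 1e-3
```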

40 Chapter 2: Existence of solutions of optimization problems Section 1: Reminders Reminder 1 We recall that K is said to be closed if for every sequence of K that converges, its limit belongs to K. In practice, we often use the following criterion to prove closedness: Practical criterion to prove closedness If C = f⁻¹(I) := {x ∈ Rⁿ : f(x) ∈ I}, where f is a continuous mapping from Rⁿ to R and I a closed subset of R, then C is a closed subset of Rⁿ. Reminder 2 We recall that a subsequence of (x_n) is a sequence (x_{φ(n)}) where φ is a strictly increasing mapping from N to itself. Reminder 3 A compact subset K of Rⁿ is a closed and bounded subset of Rⁿ. Being compact is equivalent to: "every sequence (x_n) of K admits a subsequence (x_{φ(n)}) which converges in K" (this is called the Bolzano-Weierstrass property).

41 Chapter 2: Existence of solutions of optimization problems Section 2: a first criterion for the existence of a solution of max or min problems in finite dimension Theorem 1 Let f : C → R be continuous, where C is a nonempty closed subset of Rⁿ. Assume one of the two following assumptions: (i) C is bounded. (ii) C is not bounded, but f(x) tends to +∞ when ‖x‖ → +∞ (the Euclidean norm); we then say that f is coercive. Then (P) min_{x∈C} f(x) has at least one solution. Proof.

42 Chapter 2: Existence of solutions of optimization problems Section 2: a first criterion for the existence of a solution of max or min problems in finite dimension Theorem 2 Let f : C → R be continuous, where C is a nonempty closed subset of Rⁿ. Assume one of the two following assumptions: (i) C is bounded. (ii) C is not bounded, but f(x) tends to −∞ when ‖x‖ → +∞ (the Euclidean norm); we then say that −f is coercive. Then (Q) max_{x∈C} f(x) has at least one solution.

43 Chapter 2: Existence of solutions of optimization problems Section 3: what happens in infinite dimension? a) Distance First, in infinite dimension, we have to define what "continuous", "closed", ... mean, which requires being able to define what a ball is in infinite dimension. Defining a distance is a general way to do that. Definition We say that d is a distance on a set E if d is a mapping from E × E to R which satisfies the following properties: (i) ∀(x, y) ∈ E × E, d(x, y) ≥ 0. (ii) ∀(x, y) ∈ E × E, d(x, y) = 0 if and only if x = y. (iii) ∀(x, y) ∈ E × E, d(x, y) = d(y, x). (iv) ∀(x, y, z) ∈ E × E × E, d(x, z) ≤ d(x, y) + d(y, z).

44 Examples of distances On R, d(x, y) = |x − y|. On Rⁿ, d₂((x₁, ..., xₙ), (y₁, ..., yₙ)) = √((x₁ − y₁)² + ... + (xₙ − yₙ)²): the Euclidean distance. On Rⁿ, d₁((x₁, ..., xₙ), (y₁, ..., yₙ)) = |x₁ − y₁| + ... + |xₙ − yₙ|. On Rⁿ, d_∞((x₁, ..., xₙ), (y₁, ..., yₙ)) = max{|x₁ − y₁|, ..., |xₙ − yₙ|}.
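
The three distances d₁, d₂ and d_∞ are easy to implement and compare; a minimal sketch (the sample points are illustrative):

```python
def d1(x, y):
    """Sum of coordinatewise absolute differences."""
    return sum(abs(a - b) for a, b in zip(x, y))

def d2(x, y):
    """Euclidean distance."""
    return sum((a - b) ** 2 for a, b in zip(x, y)) ** 0.5

def dinf(x, y):
    """Largest coordinatewise absolute difference."""
    return max(abs(a - b) for a, b in zip(x, y))

x, y = [1.0, 4.0], [4.0, 0.0]
assert d1(x, y) == 7.0       # |1-4| + |4-0|
assert d2(x, y) == 5.0       # sqrt(9 + 16)
assert dinf(x, y) == 4.0
# On R^n the three distances always compare as d_inf <= d_2 <= d_1:
assert dinf(x, y) <= d2(x, y) <= d1(x, y)
# Triangle inequality (axiom (iv)) for one sample triple:
z = [0.0, 1.0]
assert d2(x, z) <= d2(x, y) + d2(y, z)
```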

45 Examples of distances The discrete distance: d(x, y) = 0 if x = y, and d(x, y) = 1 otherwise. Question: what are the closed and open subsets for this distance? What are the continuous functions f : X → R, where X is endowed with the discrete metric?

46 Chapter 2: Existence of solutions of optimization problems Section 3: what happens in infinite dimension? a) Distance Definition (metric space) If d is a distance on a set E, then we say that (E, d) is a metric space. A closed ball of center x ∈ E and radius r ≥ 0 is B_C(x, r) := {y ∈ E : d(y, x) ≤ r}. An open ball of center x ∈ E and radius r ≥ 0 is B_O(x, r) := {y ∈ E : d(y, x) < r}. This allows us to define the standard notions of convergence of a sequence, continuity of a function, limits of functions, ... (developed in the lecture).

47 Chapter 2: Existence of solutions of optimization problems Section 3: what happens in infinite dimension? b) Norm Definition (norm) If E is a vector space, a norm ‖·‖ is a mapping from E to R₊ such that: ∀x ∈ E, ‖x‖ = 0 if and only if x = 0. ∀x ∈ E, ∀λ ∈ R, ‖λx‖ = |λ|·‖x‖. Triangle inequality: ∀(x, y) ∈ E², ‖x + y‖ ≤ ‖x‖ + ‖y‖. Important If ‖·‖ is a norm, then d(x, y) = ‖x − y‖ is a distance on E.

48 Chapter 2: Existence of solutions of optimization problems Section 3: what happens in infinite dimension? c) Scalar product A scalar product on a vector space E is a function ⟨·,·⟩ from E × E to R such that: (i) For every x ∈ E, the mappings y ∈ E ↦ ⟨x, y⟩ and y ∈ E ↦ ⟨y, x⟩ are linear. (ii) For every (x, y) ∈ E × E, ⟨x, y⟩ = ⟨y, x⟩. (iii) For every x ∈ E, ⟨x, x⟩ ≥ 0. (iv) For every x ∈ E, ⟨x, x⟩ = 0 if and only if x = 0.

49 Chapter 2: Existence of solutions of optimization problems Section 3: what happens in infinite dimension? Hilbert spaces Order of generality: distance is the most general notion; norm comes just after; scalar product after that. Rⁿ endowed with the Euclidean distance is the least general!

50 Chapter 2: Existence of solutions of optimization problems Section 3: what happens in infinite dimension? Counterexample to Theorem 1 in infinite dimension Call B₂ the set of real sequences (u_n)_{n≥0} ∈ R^N such that Σ_{k=0}^{+∞} u_k² ≤ 1, endowed with the metric d((u_n)_{n∈N}, (v_n)_{n∈N}) = √(Σ_{k=0}^{+∞} (u_k − v_k)²). Define f((u_n)_{n∈N}) = (Σ_{k=0}^{+∞} u_k² − 1)² + Σ_{k=0}^{+∞} u_k²/(k + 1). Then B₂ endowed with d is closed and bounded, and f is continuous, but f has no minimum on B₂. (See tutorial.)

51 Chapter 2: Existence of solutions of optimization problems Section 3: what happens in infinite dimension? But if we reinforce two things, we can extend Theorem 1 and Theorem 2 to infinite-dimensional spaces. The first thing we have to add (for Theorem 1) is convexity of the set C and of the function f. The second thing is that we have to consider a particular kind of metric space, called a separable Hilbert space.

52 Chapter 2: Existence of solutions of optimization problems Section 3: what happens in infinite dimension? c) Scalar product We say that (E, ⟨·,·⟩) is a Hilbert space if: (i) ⟨·,·⟩ is a scalar product. (ii) E endowed with the metric d(x, y) = √(⟨x − y, x − y⟩) is complete, that is, every Cauchy sequence for this distance converges for this distance. Condition (ii) is always true in finite-dimensional spaces.

53 Chapter 2: Existence of solutions of optimization problems Section 3: what happens in infinite dimension? Hilbert spaces Example: ℓ² := {(u_n)_{n≥0} real sequence : Σ_{k=0}^{+∞} u_k² < +∞}, endowed with ⟨(u_n)_{n≥0}, (v_n)_{n≥0}⟩ = Σ_{k=0}^{+∞} u_k·v_k. Counter-example: C⁰([a, b], R), the set of continuous functions from [a, b] to R endowed with ⟨f, g⟩ = ∫_a^b f(x)g(x)dx, is not a Hilbert space.

54 Chapter 2: Existence of solutions of optimization problems Section 3: what happens in infinite dimension? And what happens to the notion of basis in a Hilbert space? We say that a countable family (e_k)_{k∈N} of elements of a Hilbert space E is a Hilbert basis of (E, ⟨·,·⟩) if it satisfies the two following properties: (i) It is orthonormal, that is, ‖e_k‖ = 1 for every k ≥ 0 and ⟨e_i, e_j⟩ = 0 for every i ≠ j. (ii) For every x ∈ E, there exists a real sequence (x_n) such that x = Σ_{k=0}^{+∞} x_k·e_k, which means that lim_{N→+∞} ‖(Σ_{k=0}^{N} x_k·e_k) − x‖ = 0. This is the natural extension of an orthonormal basis in finite-dimensional spaces.

55 Chapter 2: Existence of solutions of optimization problems Section 3: what happens in infinite dimension? Two important things to know in Hilbert spaces: the Hahn-Banach theorem and the projection theorem.

56 Chapter 2: Existence of solutions of optimization problems Section 3: what happens in infinite dimension? Projection Projection theorem For every closed and convex subset C of a Hilbert space H and every y ∈ H, there exists a unique projection P_C(y) of y on C, defined by ‖P_C(y) − y‖ = min_{x∈C} ‖y − x‖. Proof. If C belongs to a finite-dimensional space, we do not need H to be a Hilbert space, only that it possesses a scalar product.

57 Chapter 2: Existence of solutions of optimization problems Section 3: what happens in infinite dimension? Projection Important characterization of the projection through an inequality: Characterization of the projection Consider a closed and convex subset C of a Hilbert space H. Let y ∈ H. Then z = P_C(y) is completely characterized by: (i) z ∈ C, and (ii) ∀x ∈ C, ⟨y − P_C(y), x − P_C(y)⟩ ≤ 0. Exercise!
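
For the box C = [0, 1]ⁿ (closed and convex), the projection is simply coordinatewise clipping, and the characterization can be verified numerically; a sketch (the box and the sample points are illustrative choices):

```python
def project_box(y, lo=0.0, hi=1.0):
    """Projection of y onto the box C = [lo, hi]^n (closed and convex):
    clip each coordinate independently."""
    return [min(max(c, lo), hi) for c in y]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

y = [2.0, -0.5, 0.3]
p = project_box(y)
assert p == [1.0, 0.0, 0.3]

# Characterization (ii): for every x in C, <y - P_C(y), x - P_C(y)> <= 0.
test_points = [[0.0, 0.0, 0.0], [1.0, 1.0, 1.0], [0.5, 0.2, 0.9]]
for x in test_points:
    assert dot([a - b for a, b in zip(y, p)],
               [a - b for a, b in zip(x, p)]) <= 1e-12
```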

58 Chapter 2: Existence of solutions of optimization problems Section 3: what happens in infinite dimension? Separation In a Hilbert space H, a hyperplane K is defined by K = {x ∈ H : ⟨x, y⟩ = a} for some given a ∈ R and some given y ∈ H, y ≠ 0. Half-spaces (strict or not) are defined similarly, replacing the equality by an inequality.

59 Chapter 2: Existence of solutions of optimization problems Section 3: what happens in infinite dimension? Separation Hahn-Banach theorem (particular form): if H is a Hilbert space, C a closed convex subset of H and x₀ ∉ C, then there exist a ∈ R and y ∈ H such that ∀c ∈ C : ⟨c, y⟩ < a < ⟨x₀, y⟩. Interpretation in terms of half-spaces. Picture. One can reverse the inequalities in this theorem without modifying its validity!

60 Chapter 2: Existence of solutions of optimization problems Section 3: what happens in infinite dimension? Theorem 1′ Let f : C → R be a continuous and convex function, where C is a closed convex subset of H, a Hilbert space with a Hilbert basis. Assume one of the two following assumptions: (i) C is bounded. (ii) C is not bounded, but f(x) tends to +∞ when ⟨x, x⟩ → +∞ (we say that f is coercive). Then (P) min_{x∈C} f(x) has at least one solution. (Similar statement for the maximum, replacing "f(x) tends to +∞" by "f(x) tends to −∞" and convexity by concavity.) Remark: the theorem is in fact true in any Hilbert space.

61 Chapter 2: Existence of solutions of optimization problems Proof. Consider a minimizing sequence (x_n)_{n≥0}. From the assumption (C bounded or f coercive), the sequence (x_n)_{n≥0} is bounded. Lemma (see Exercise 7, tutorial 1): this implies that there exist x* ∈ H and a subsequence (x_{φ(n)})_{n≥0}, now denoted (z_n)_{n≥0}, of (x_n)_{n≥0} such that for every y ∈ H, (⟨y, z_n⟩)_{n≥0} converges to ⟨y, x*⟩ (one says that the sequence (z_n) converges weakly to x*). Then, we want to prove that x* is a solution of (P).

62 Chapter 2: Existence of solutions of optimization problems Proof (continued). To prove that, let ε > 0. There exists N > 0 such that for n ≥ N, f(z_n) ≤ inf_{c∈C} f(c) + ε. Assume (proof by contradiction) that f(x*) > inf_{c∈C} f(c) + ε. Consider A = {c ∈ C : f(c) ≤ inf_{c∈C} f(c) + ε}. This set is convex and closed (same proof as in Exercise 0 in tutorial 1). For n ≥ N, z_n ∈ A, but x* ∉ A. Thus, from the Hahn-Banach theorem, we get the existence of y ∈ H and a real a such that ⟨y, x*⟩ > a > ⟨y, z_n⟩ for every n ≥ N. Passing to the limit, we get ⟨y, x*⟩ > a ≥ ⟨y, x*⟩: a contradiction. Finally, f(x*) ≤ inf_{c∈C} f(c) + ε, and since ε > 0 is arbitrary, we get f(x*) ≤ inf_{c∈C} f(c).

63 Chapter 3: extension to l.s.c. or u.s.c. functions Many functions in economics, finance, ... are naturally discontinuous. Could we get an existence result for optima of discontinuous functions? Upper semicontinuous function Let (E, d) be a metric space. f : E → R is upper semicontinuous (u.s.c.) if for every λ ∈ R, the set {x ∈ E : f(x) ≥ λ} is a closed subset of E. Lower semicontinuous function Let (E, d) be a metric space. f : E → R is lower semicontinuous (l.s.c.) if for every λ ∈ R, the set {x ∈ E : f(x) ≤ λ} is a closed subset of E.

64 Chapter 3: extension to l.s.c. or u.s.c. functions Remark that if f : E → R is both l.s.c. and u.s.c., then it is continuous. Link, example.

65 Chapter 3: extension to l.s.c. or u.s.c. functions All the previous theorems (Theorems 1, 2 and 1′) can be extended, replacing the continuity assumption by lower semicontinuity for minimum problems, and by upper semicontinuity for maximum problems. For example: Theorem 1 (l.s.c. version) Let f : C → R be lower semicontinuous, where C is a closed subset of Rⁿ. Assume one of the two following assumptions: (i) C is bounded. (ii) C is not bounded, but f(x) tends to +∞ when ‖x‖ → +∞ (the Euclidean norm); we say that f is coercive. Then (P) min_{x∈C} f(x) has at least one solution. Proof: see tutorial 2.

66 Chapter 4: Regularization Moreau regularization Let f : Rⁿ → R be lower semicontinuous. For every λ > 0, define f_λ(x) = inf_{y∈Rⁿ} ( f(y) + ‖y − x‖²/(2λ) ). Then: i) For every x ∈ Rⁿ, f_λ(x) ≤ f(x). ii) The function x ↦ f_λ(x) is continuous. iii) For every x ∈ Rⁿ, f_{λₙ}(x) → f(x) if λₙ → 0⁺. Proof: tutorial 2.
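
The Moreau regularization can be approximated by taking the infimum over a finite grid of y values; a sketch for f(x) = |x|, checking properties i) and iii) numerically (the grid and the values of λ are illustrative choices):

```python
def moreau(f, x, lam, grid):
    """Moreau regularization f_lambda(x) = inf_y ( f(y) + |y - x|**2 / (2*lam) ),
    approximated by a minimum over a finite grid of y values."""
    return min(f(y) + (y - x) ** 2 / (2 * lam) for y in grid)

f = abs                                     # f(x) = |x|, l.s.c. (even continuous)
grid = [-3 + 0.001 * k for k in range(6001)]

x = 0.25
for lam in (1.0, 0.1, 0.01):
    # property i): f_lambda(x) <= f(x)
    assert moreau(f, x, lam, grid) <= f(x) + 1e-9

# property iii): f_lambda(x) -> f(x) as lambda -> 0+
assert abs(moreau(f, x, 0.001, grid) - f(x)) < 1e-2
```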

67 Chapter 5: Multivalued functions and Berge's theorem Consider the value of a consumer's maximization problem: V(p₁, ..., pₙ, w) = max { U(x₁, ..., xₙ) : p₁x₁ + ... + pₙxₙ ≤ w }. A natural question is: how does the value move when the price p moves and when the wealth w moves? Berge's theorem is an answer to this question.

68 Chapter 5: Multivalued functions and Berge's theorem Definition: multivalued mapping A multivalued function Φ from Rⁿ to R^p is a function from Rⁿ to the set of subsets of R^p. The graph of Φ is Gr(Φ) := {(x, y) ∈ Rⁿ × R^p : y ∈ Φ(x)}. Definition: continuous multivalued mapping The multivalued function Φ from Rⁿ to R^p is said to be continuous if for every open subset O of R^p, the sets {x ∈ Rⁿ : Φ(x) ⊆ O} and {x ∈ Rⁿ : Φ(x) ∩ O ≠ ∅} are both open subsets of Rⁿ.

69 Chapter 5: Multivalued functions and Berge's theorem Berge's theorem Consider a continuous multivalued function Φ from Rⁿ to R^p with nonempty and compact values Φ(x) for every x ∈ Rⁿ. Let f : Rⁿ × R^p → R be a continuous function. Let m(x) = max_{y∈Φ(x)} f(x, y), called the value function, and µ(x) = {y ∈ Φ(x) : f(x, y) = m(x)}, the set of solutions of the maximization problem (as a function of x). Then: i) The value function m is continuous. ii) The multivalued function µ has a closed graph (in particular, it is continuous if µ is single-valued: see tutorial 2).
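
A minimal numerical illustration of Berge's theorem on a toy instance (Φ(x) = [0, x] for x ≥ 0 and f(x, y) = xy − y² are illustrative choices, not from the course): the constraint set moves continuously with x, the unconstrained maximizer y = x/2 stays feasible, so the value function m(x) = x²/4 is continuous, as the theorem predicts:

```python
def m(x, steps=10000):
    """Value function m(x) = max over y in [0, x] of f(x, y) = x*y - y**2,
    approximated by a grid search over Phi(x) = [0, x]."""
    f = lambda x, y: x * y - y**2
    return max(f(x, k * x / steps) for k in range(steps + 1))

# The exact maximizer is y = x/2, giving m(x) = x**2 / 4:
assert abs(m(2.0) - 1.0) < 1e-6
assert abs(m(3.0) - 2.25) < 1e-6
# Continuity of the value function near x = 2, numerically:
assert abs(m(2.001) - m(2.0)) < 1e-2
```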

70 Chapter 6: Tangent and normal cones to subsets of R^n. Tangent cone of subsets of R^n. Let C be a subset of R^n and x̄ ∈ C. The tangent cone of C at x̄ is the set, denoted T_C(x̄), of all "directions" d ∈ R^n such that there exist a sequence of positive reals (ε_n) converging to 0 and a sequence (x_n) of elements of C converging to x̄, such that lim_{n → +∞} (x_n − x̄)/ε_n = d. Intuitively, this is the set of directions inward to C at x̄. Property. Let C be a subset of R^n and x̄ ∈ C. Then: i) T_C(x̄) is a closed cone (K is a cone means that for every λ ≥ 0 and every d ∈ K, λd ∈ K). ii) If x̄ is interior to C, then T_C(x̄) = R^n.

71 Chapter 6: Tangent and normal cones to subsets of R^n. Normal cone of subsets of R^n. Let C be a subset of R^n and x̄ ∈ C. The normal cone of C at x̄ is the set, denoted N_C(x̄), of all d ∈ R^n such that ⟨w, d⟩ ≤ 0 for every w ∈ T_C(x̄). Intuitively, this is the set of directions from x̄ that quit C "rapidly".

72 Chapter 6: Tangent and normal cones to subsets of R^n. Property. Let C be a subset of R^n and x̄ ∈ C. Then: i) N_C(x̄) is a closed convex cone. ii) If x̄ is interior to C, then N_C(x̄) = {0}. iii) When C is convex, N_C(x̄) can be equivalently defined by N_C(x̄) = {d ∈ R^n : ⟨x − x̄, d⟩ ≤ 0 for every x ∈ C}: see tutorial. Proof of iii) now.

73 Chapter 7: First order necessary condition for optimum with tangent cone. Recall: differentiability. f : R^n → R is differentiable at x̄ ∈ R^n if we can write, for every h = (h_1, ..., h_n), f(x̄ + h) = f(x̄) + ⟨∇f(x̄), h⟩ + ‖h‖ ε(h) for some function ε : R^n → R which converges to 0 at 0. Interpretation: f admits a first order linear development. This is the case, for example, when f is C¹ (thus differentiability is implied by C¹; the converse may be false).

74 Chapter 7: First order necessary condition for optimum with tangent cone. Interpretation of the gradient. From f(x̄ + h) = f(x̄) + ⟨∇f(x̄), h⟩ + ‖h‖ ε(h), we interpret the gradient of f at x̄ as the direction of maximal increase of f from x̄. Explanation; picture.

75 Chapter 7: First order necessary condition for optimum with tangent cone. Euler condition for a maximum. Consider C a subset of R^n, a function f : R^n → R, and consider the problem (P) sup_{x ∈ C} f(x). If x̄ ∈ C is a local solution of (P) (thus also if it is a global solution) and f is differentiable at x̄, then we must have ∇f(x̄) ∈ N_C(x̄).

76 Chapter 7: First order necessary condition for optimum with tangent cone. Proof. Example: maximize x² − 2x + y under the constraints x ≥ 0, y ≥ 0. Thus it is important to know what N_C looks like when C is defined by inequalities or equalities...

77 Chapter 7: First order necessary condition for optimum with tangent cone. Euler condition for a minimum. Consider C a subset of R^n, a function f : R^n → R, and consider the problem (P) inf_{x ∈ C} f(x). If x̄ ∈ C is a local solution of (P) (thus also if it is a global solution) and f is differentiable at x̄, then we must have −∇f(x̄) ∈ N_C(x̄).

78 Chapter 7: First order necessary condition for optimum with tangent cone. Euler condition for a maximum. Consider C a subset of R^n, a function f : R^n → R, and consider the problem (P) sup_{x ∈ C} f(x). If x̄ ∈ C is a local solution of (P) (thus also if it is a global solution) and f is differentiable at x̄, then we must have ∇f(x̄) ∈ N_C(x̄).

79 Chapter 7: First order necessary condition for optimum with tangent cone. Critical points. Consider C a subset of R^n, a function f : R^n → R, and consider the problems (P) inf_{x ∈ C} f(x) and (Q) sup_{x ∈ C} f(x). If x̄ is a local solution of (P) or a local solution of (Q) (thus also if it is a global solution), x̄ is an interior point of C, and f is differentiable at x̄, then we must have ∇f(x̄) = 0. We say that x̄ is a critical point of f. Be careful: if x̄ is a critical point, it is possible that x̄ is neither a solution of (P) nor a solution of (Q).
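A small numerical illustration of the warning above (a sketch, not from the slides), using the cubic f(x) = x³ − 3x that reappears later in the course: its critical points are local optima only, not global ones.

```python
# The cubic f(x) = x**3 - 3*x has two critical points, x = -1 and x = 1;
# x = 1 is a local minimum and x = -1 a local maximum, but neither is global.
def f(x):
    return x**3 - 3 * x

def fprime(x):
    return 3 * x**2 - 3

critical = [-1.0, 1.0]                      # the solutions of f'(x) = 0
assert all(fprime(c) == 0.0 for c in critical)

# f is unbounded below and above, so the critical points are only local optima:
print(f(1.0), f(-1.0), f(10.0), f(-10.0))
```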

80 Chapter 8: Computation of the normal cone if C is defined by a system of equalities. Consider f_1, ..., f_k some C¹ functions from R^n to R, where k ≤ n. Define C = {x ∈ R^n : f_1(x) = ... = f_k(x) = 0}. Could we have a formula for the normal cone of C at some point x̄? Yes, if C is regular at x̄: Regularity (or qualification) conditions for a system of equalities. The set of constraints defined by f_1(x) = ... = f_k(x) = 0 is regular if for every x̄ ∈ R^n such that f_1(x̄) = ... = f_k(x̄) = 0 (that is, x̄ ∈ C), the n × k matrix whose columns are ∇f_1(x̄), ..., ∇f_k(x̄) has rank k.

81 Chapter 8: Computation of the normal cone if C is defined by a system of equalities. Example 1: f(x, y) = x² + y² and g(x, y) = x + y. Is the set of constraints defined by f = g = 0 regular? No! Example 2: f(x, y, z) = x + y + z and g(x, y, z) = x² + y² + 2z² − 1. Is the set of constraints defined by f = g = 0 regular? Yes!
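The two rank computations can be checked numerically (a sketch using numpy; the feasible point used in Example 2 is one convenient choice among many):

```python
# Checking the rank of the matrix whose columns are the constraint gradients.
import numpy as np

# Example 1: the only feasible point is (0, 0), where grad f = (0, 0) and
# grad g = (1, 1); the 2 x 2 matrix of gradients has rank 1 < 2: not regular.
M1 = np.column_stack([[0.0, 0.0], [1.0, 1.0]])
r1 = np.linalg.matrix_rank(M1)

# Example 2 at the feasible point (1/sqrt(2), -1/sqrt(2), 0), where
# grad f = (1, 1, 1) and grad g = (2x, 2y, 4z): rank 2 = k.
x, y, z = 1 / np.sqrt(2), -1 / np.sqrt(2), 0.0
assert abs(x + y + z) < 1e-12 and abs(x**2 + y**2 + 2 * z**2 - 1) < 1e-12
M2 = np.column_stack([[1.0, 1.0, 1.0], [2 * x, 2 * y, 4 * z]])
r2 = np.linalg.matrix_rank(M2)
print(r1, r2)
```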

82 Chapter 8: Computation of the normal cone if C is defined by a system of equalities. Submanifold of R^n. Consider f_1, ..., f_k some C¹ functions from R^n to R, where k ≤ n. Assume the set of constraints is regular. Then C = {x ∈ R^n : f_1(x) = ... = f_k(x) = 0} is called a submanifold of R^n. Its dimension is dim(C) = n − k. If k = 1, C is called a hypersurface. Interpretation: locally, C can be parametrized by only n − k coordinates, because the regular equations allow us to eliminate (locally) k variables. This can be formalized by the implicit function theorem.

83 Chapter 8: Computation of the normal cone if C is defined by a system of equalities. Implicit function theorem: the case of one variable. Example 1: f(x, y) = ax + by + c = 0 (implicit-form equation). It is possible to write it in explicit form, y = g(x), if and only if... Sometimes it is possible to pass from implicit form to explicit form only for x in some neighborhood (that is, locally). Example 2: f(x, y) = x² + y² − 1 = 0.

84 Chapter 8: Computation of the normal cone if C is defined by a system of equalities. Implicit function theorem (one variable): Let U and V be two open subsets of R. Let f : U × V → R be C¹. Let (x̄, ȳ) ∈ U × V such that f(x̄, ȳ) = 0 and ∂f/∂y(x̄, ȳ) ≠ 0. Then: (1) there exist open neighborhoods U_x̄ of x̄ and V_ȳ of ȳ; (2) there exists a C¹ function g : U_x̄ → V_ȳ such that for every (x, y) ∈ U_x̄ × V_ȳ, f(x, y) = 0 is equivalent to y = g(x). Moreover, we have g′(x̄) = − (∂f/∂x)(x̄, ȳ) / (∂f/∂y)(x̄, ȳ).
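A numerical sketch on Example 2, f(x, y) = x² + y² − 1: around the point (0.6, 0.8), where ∂f/∂y = 1.6 ≠ 0, the explicit form is g(x) = √(1 − x²), and the theorem's formula for g′(x̄) can be compared with a finite difference.

```python
# Verifying g'(xbar) = -f_x/f_y for f(x, y) = x**2 + y**2 - 1 near (0.6, 0.8).
import math

def g(x):
    return math.sqrt(1 - x**2)      # local explicit form near (0.6, 0.8)

xbar, ybar = 0.6, 0.8
fx, fy = 2 * xbar, 2 * ybar         # partial derivatives of f at (xbar, ybar)
formula = -fx / fy                  # implicit function theorem: g'(xbar)

h = 1e-6
numeric = (g(xbar + h) - g(xbar - h)) / (2 * h)   # central finite difference
print(formula, numeric)
```

By hand: g′(0.6) = −0.6/0.8 = −3/4, which matches −f_x/f_y = −1.2/1.6.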

85 Chapter 8: Computation of the normal cone if C is defined by a system of equalities. Explanation. The last equality is a consequence of the chain rule: reminders! Let f : R^n → R be C¹ (notation: f(u_1, ..., u_n)), and for every i = 1, ..., n, let u_i : R^n → R be C¹ functions (notation: u_i(x_1, ..., x_n)). Chain rule: how to compute the gradient of g(x_1, ..., x_n) = f(u_1(x_1, ..., x_n), u_2(x_1, ..., x_n), ..., u_n(x_1, ..., x_n))? We have ∂g/∂x_i (x_1, ..., x_n) = Σ_{k=1}^n (∂f/∂u_k)(u_1(x), ..., u_n(x)) · (∂u_k/∂x_i)(x). Example: gradient of g(x, y) = f(x + y, x² y), using the derivatives of f?
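A runnable sketch of the example g(x, y) = f(x + y, x²y), with a concrete choice f(u, v) = sin(u) + uv (this particular f is an assumption of the sketch, only there to make the check executable; the chain-rule formula itself does not depend on it):

```python
# Chain rule: dg/dx = f_u * d(x+y)/dx + f_v * d(x^2*y)/dx, and similarly in y.
import math

def f(u, v):
    return math.sin(u) + u * v

def fu(u, v):                     # partial derivative of f in u
    return math.cos(u) + v

def fv(u, v):                     # partial derivative of f in v
    return u

def g(x, y):
    return f(x + y, x**2 * y)

x, y = 0.7, -0.3
u, v = x + y, x**2 * y
gx = fu(u, v) * 1 + fv(u, v) * (2 * x * y)   # chain rule in x
gy = fu(u, v) * 1 + fv(u, v) * (x**2)        # chain rule in y

h = 1e-6
gx_num = (g(x + h, y) - g(x - h, y)) / (2 * h)
gy_num = (g(x, y + h) - g(x, y - h)) / (2 * h)
print(gx, gx_num, gy, gy_num)
```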

86 Chapter 8: Computation of the normal cone if C is defined by a system of equalities. Explanation: now, let us prove g′(x̄) = − (∂f/∂x)(x̄, ȳ) / (∂f/∂y)(x̄, ȳ). Remark that f(x, g(x)) = 0 for every x ∈ U_x̄. Thus the derivative with respect to x at x̄ should be zero. Then use the chain rule.

87 Chapter 8: Computation of the normal cone if C is defined by a system of equalities. Implicit function theorem: the case of several variables. Example 2: consider the implicit-form equations: f_1(y_1, ..., y_n, x_1, ..., x_p) = a_{11} y_1 + a_{12} y_2 + ... + a_{1n} y_n + a_{1,n+1} x_1 + ... + a_{1,n+p} x_p = 0, f_2(y_1, ..., y_n, x_1, ..., x_p) = a_{21} y_1 + a_{22} y_2 + ... + a_{2n} y_n + a_{2,n+1} x_1 + ... + a_{2,n+p} x_p = 0, ..., f_n(y_1, ..., y_n, x_1, ..., x_p) = a_{n1} y_1 + a_{n2} y_2 + ... + a_{nn} y_n + a_{n,n+1} x_1 + ... + a_{n,n+p} x_p = 0. It is possible to write this in explicit form (y_1, ..., y_n) = g(x_1, ..., x_p) (in a unique way) if and only if...

88 Chapter 8: Computation of the normal cone if C is defined by a system of equalities. Implicit function theorem (n + p variables, n equations): Let U and V be two open subsets of R^p and R^n. Let f_1, ..., f_n : U × V → R be C¹, and write f = (f_1, ..., f_n). Let (x̄, ȳ) = (x̄_1, ..., x̄_p, ȳ_1, ..., ȳ_n) ∈ U × V such that f(x̄, ȳ) = 0 and the matrix ∂(f_1, ..., f_n)/∂(y_1, ..., y_n)(x̄, ȳ) is invertible. Then: (1) there exist open neighborhoods U_x̄ of (x̄_1, ..., x̄_p) and V_ȳ of (ȳ_1, ..., ȳ_n); (2) there exists a C¹ function g = (g_1, ..., g_n) : U_x̄ → V_ȳ such that for every (x, y) ∈ U_x̄ × V_ȳ, f(x_1, ..., x_p, y_1, ..., y_n) = 0 is equivalent to (y_1, ..., y_n) = g(x_1, ..., x_p). Moreover, we have ∂g/∂(x_1, ..., x_p)(x̄) = − [∂(f_1, ..., f_n)/∂(y_1, ..., y_n)(x̄, ȳ)]⁻¹ · ∂(f_1, ..., f_n)/∂(x_1, ..., x_p)(x̄, ȳ).

89 Chapter 8: Computation of the normal cone if C is defined by a system of equalities. Computation of N_C for a regular system of equalities. Consider f_1, ..., f_k some C¹ functions from R^n to R, where k ≤ n. Assume the set of constraints is regular, and let C = {x ∈ R^n : f_1(x) = ... = f_k(x) = 0}. Then for every x̄ ∈ C, N_C(x̄) = { Σ_{i=1}^k λ_i ∇f_i(x̄) : λ_i ∈ R } = Span{∇f_1(x̄), ..., ∇f_k(x̄)}, and T_C(x̄) = {h ∈ R^n : ⟨∇f_i(x̄), h⟩ = 0, i = 1, ..., k}. Example: hypersurface.

90 Chapter 9: Lagrange multipliers and first order necessary condition for a regular system of equalities. Lagrange multipliers. Consider f_1, ..., f_k some C¹ functions from R^n to R, where k ≤ n. Assume the set of constraints is regular, and let C = {x ∈ R^n : f_1(x) = ... = f_k(x) = 0}. Consider the problem (P) min_{x ∈ C} f(x). If x̄ ∈ C is a solution of (P) and f is differentiable at x̄, then there exist some reals λ_1, ..., λ_k such that ∇f(x̄) = Σ_{i=1}^k λ_i ∇f_i(x̄). The coefficients λ_i are called Lagrange multipliers.

91 Chapter 9: Lagrange multipliers and first order necessary condition for a regular system of equalities. Lagrange multipliers. Consider f_1, ..., f_k some C¹ functions from R^n to R, where k ≤ n. Assume the set of constraints is regular, and let C = {x ∈ R^n : f_1(x) = ... = f_k(x) = 0}. Consider the problem (P) max_{x ∈ C} f(x). If x̄ ∈ C is a solution of (P) and f is differentiable at x̄, then there exist some reals λ_1, ..., λ_k such that ∇f(x̄) = Σ_{i=1}^k λ_i ∇f_i(x̄). The coefficients λ_i are called Lagrange multipliers.

92 Chapter 9: Lagrange multipliers and first order necessary condition for a regular system of equalities. Sometimes, some authors write the necessary first order conditions as ∇L(x̄, λ_1, ..., λ_k) = 0, where L(x, λ_1, ..., λ_k) = f(x) − λ_1 f_1(x) − ... − λ_k f_k(x) is called the Lagrangian function. You try to solve this system (with n + k unknowns) to find candidates to be solutions of the optimization problem.

93 Chapter 9: Lagrange multipliers and first order necessary condition for a regular system of equalities. Example 1: minimize 2x² + y² under the constraint x + y = 1. Example 2: (P) max_{x_1² + x_2² + ... + x_n² = 1} x_1 x_2 x_3 ... x_n.
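Example 1 can be solved by hand and sanity-checked numerically (a sketch, not from the slides). The Lagrange condition gives (4x, 2y) = λ(1, 1), hence y = 2x; with x + y = 1 this yields x = 1/3, y = 2/3, λ = 4/3.

```python
# Candidate from the Lagrange condition for: minimize 2x^2 + y^2 s.t. x + y = 1.
x, y, lam = 1 / 3, 2 / 3, 4 / 3
assert abs(4 * x - lam) < 1e-12 and abs(2 * y - lam) < 1e-12   # gradient condition
assert abs(x + y - 1) < 1e-12                                  # feasibility

value = 2 * x**2 + y**2       # = 2/3

# Sanity check along the constraint, parametrized by y = 1 - t:
best = min(2 * t**2 + (1 - t) ** 2 for t in [i / 1000 for i in range(-2000, 2001)])
print(value, best)
```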

94 Chapter 10: Lagrange multipliers and first order necessary condition for a regular system of inequalities and equalities. For constraints defined by inequalities and equalities, we can find similar Lagrange multipliers (KKT theorem below), but the conditions are more complex. Again, the only difficulty is to be able to write the normal cone, which (again) requires regularity conditions (see below).

95 Chapter 10: Lagrange multipliers and first order necessary condition for a regular system of inequalities and equalities. Intuition when we have inequalities, through an example. Consider C = {(x, y) ∈ R² : g(x, y) = x² + y² − 1 ≤ 0}. Then for every (x̄, ȳ) ∈ C on the boundary (g(x̄, ȳ) = 0), N_C(x̄, ȳ) = {µ ∇g(x̄, ȳ) : µ ≥ 0} and T_C(x̄, ȳ) = {h ∈ R² : ⟨∇g(x̄, ȳ), h⟩ ≤ 0}. Difference with equality?

96 Chapter 10: Lagrange multipliers and first order necessary condition for a regular system of inequalities and equalities. KKT (Karush, Kuhn and Tucker) Theorem. Consider f_1, ..., f_k, g_1, ..., g_m some C¹ functions from R^n to R, where k ≤ n. Assume the set of constraints satisfies the regularity (also called qualification) conditions we will see after. Let C = {x ∈ R^n : f_1(x) = ... = f_k(x) = 0, g_1(x) ≤ 0, ..., g_m(x) ≤ 0}. Consider the problem (P) min_{x ∈ C} f(x). If x̄ ∈ C is a solution of (P) and f is differentiable at x̄, then there exist some reals λ_1, ..., λ_k, µ_1, ..., µ_m such that: (i) ∇f(x̄) + Σ_{i=1}^k λ_i ∇f_i(x̄) + Σ_{j=1}^m µ_j ∇g_j(x̄) = 0; (ii) for every j = 1, ..., m, µ_j ≥ 0 (positivity of the multipliers associated to the inequalities); (iii) for every j = 1, ..., m, µ_j = 0 or g_j(x̄) = 0 (each inequality constraint is binding or the associated multiplier is null); (iv) for every i = 1, ..., k, f_i(x̄) = 0 (equality constraints satisfied!); (v) for every j = 1, ..., m, g_j(x̄) ≤ 0 (inequality constraints satisfied!).

97 Chapter 10: Lagrange multipliers and first order necessary condition for a regular system of inequalities and equalities. KKT (Karush, Kuhn and Tucker) Theorem. Consider f_1, ..., f_k, g_1, ..., g_m some C¹ functions from R^n to R, where k ≤ n. Assume the set of constraints satisfies the regularity (also called qualification) conditions we will see after. Let C = {x ∈ R^n : f_1(x) = ... = f_k(x) = 0, g_1(x) ≤ 0, ..., g_m(x) ≤ 0}. Consider the problem (P) max_{x ∈ C} f(x). If x̄ ∈ C is a solution of (P) and f is differentiable at x̄, then there exist some reals λ_1, ..., λ_k, µ_1, ..., µ_m such that: (i) ∇f(x̄) − Σ_{i=1}^k λ_i ∇f_i(x̄) − Σ_{j=1}^m µ_j ∇g_j(x̄) = 0; (ii) for every j = 1, ..., m, µ_j ≥ 0; (iii) for every j = 1, ..., m, µ_j g_j(x̄) = 0; (iv) for every i = 1, ..., k, f_i(x̄) = 0; (v) for every j = 1, ..., m, g_j(x̄) ≤ 0.

98 Chapter 10: Lagrange multipliers and first order necessary condition for a regular system of inequalities and equalities. Intuitively, regularity conditions are conditions on the constraints so that we have a nice formula for the normal cone, which allows us to obtain the "simple" KKT conditions. Condition 1. A first possible condition that is enough to get the KKT theorem is Slater's condition. Slater's condition: Consider f_1, ..., f_k, g_1, ..., g_m some C¹ functions from R^n to R, where k ≤ n. Slater's conditions hold if: (i) all the g_j are convex; (ii) all the f_i are affine; (iii) there exists a feasible point x̂ (i.e. it satisfies the constraints) such that for every j such that g_j is not affine, we have g_j(x̂) < 0.

99 Chapter 10: Lagrange multipliers and first order necessary condition for a regular system of inequalities and equalities. A second possible condition under which the KKT theorem is true is the following regularity condition. Regularity (or qualification) conditions for a system of equalities together with inequalities: the set of constraints defined by f_1 = ... = f_k = 0, g_1 ≤ 0, ..., g_m ≤ 0 is regular if for every x̄ ∈ R^n such that f_1(x̄) = ... = f_k(x̄) = 0, g_1(x̄) ≤ 0, ..., g_m(x̄) ≤ 0, the n × (k + m) matrix whose columns are ∇f_1(x̄), ..., ∇f_k(x̄), ∇g_1(x̄), ..., ∇g_m(x̄) has rank m + k.

100 Chapter 10: Lagrange multipliers and first order necessary condition for a regular system of inequalities and equalities. Example of the use of KKT: (P) min_{x + y ≤ 3, 2x + y ≤ 2} x² − 4x + y² − 6y.
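Reading the constraints as x + y ≤ 3 and 2x + y ≤ 2, the KKT system can be solved by hand (a sketch, not from the slides): only the second constraint is binding, giving the candidate (x, y) = (0, 2) with µ₁ = 0, µ₂ = 2. Since f is convex and the constraints are affine, this candidate is the minimum, with value −8.

```python
# KKT check for: minimize x^2 - 4x + y^2 - 6y  s.t.  x + y <= 3, 2x + y <= 2.
def f(x, y):
    return x**2 - 4 * x + y**2 - 6 * y

x, y, mu1, mu2 = 0.0, 2.0, 0.0, 2.0
# (i) stationarity: grad f + mu1 * grad g1 + mu2 * grad g2 = 0
assert 2 * x - 4 + mu1 + 2 * mu2 == 0.0
assert 2 * y - 6 + mu1 + mu2 == 0.0
# (ii)-(iii) positivity and complementary slackness
assert mu1 >= 0 and mu2 >= 0
assert mu1 * (x + y - 3) == 0.0 and mu2 * (2 * x + y - 2) == 0.0
# (v) feasibility
assert x + y <= 3 and 2 * x + y <= 2

# Brute-force sanity check on a feasible grid:
pts = [(i / 50, j / 50) for i in range(-200, 201) for j in range(-200, 201)
       if i / 50 + j / 50 <= 3 and 2 * (i / 50) + j / 50 <= 2]
best = min(f(a, b) for a, b in pts)
print(f(x, y), best)
```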

101 Lecture 4: Convexity and sufficient conditions for optima. Let us explain the main idea. For some functions, the critical points (solutions of ∇f(x) = 0) provide optima. For example f(x) = x² (here the critical point gives a global minimum). Also f(x) = −x² (here it gives a global maximum). Also f(x) = x³ − 3x (here each critical point gives a local maximum or a local minimum). But for f(x) = x³, the critical point does not give any local solution! Thus, in general, we want criteria to be able to know if a critical point is a solution, local or global. A possible answer is related to the sign of the second derivative (in general, of the Hessian).

102 Chapter 11: Convexity and sufficient conditions for optima. Section 1: positive, negative Hessian. What is the intuition? Consider f : R → R a C² function. The second order development of f at x̄ gives: f(x̄ + h) = f(x̄) + f′(x̄) h + (1/2) f″(x̄) h² + h² ε(h). Thus, if x̄ is a critical point, f(x̄ + h) − f(x̄) = h² ((1/2) f″(x̄) + ε(h)). Thus, if we know f″(x̄) > 0, this gives... We now generalize to multivariable functions.

103 Chapter 11: Convexity and sufficient conditions for optima. Section 1: positive, negative Hessian. Consider f : R^n → R a C² function. From Schwarz's theorem, the Hessian Hess f(x) is a symmetric real matrix. There is a general theory which allows us to say what it means for a symmetric real matrix to be positive, negative, ...

104 Chapter 11: Convexity and sufficient conditions for optima. Section 1: positive, negative Hessian. Definition (definition 1). A real symmetric matrix M is positive semidefinite if for every n × 1 real matrix X, we have ᵗX M X ≥ 0. Definition (equivalent definition). A real symmetric matrix M is positive semidefinite if all the eigenvalues of M (they exist!) are ≥ 0. Proof of equivalence: by the spectral theorem, there exists an invertible and orthogonal matrix P such that M = P D P⁻¹ = P D ᵗP, where D is the diagonal matrix of the eigenvalues of M. Thus ᵗX M X = ᵗX P D ᵗP X = ᵗY D Y with Y = ᵗP X. Thus ᵗX M X ≥ 0 if and only if ᵗY D Y ≥ 0 ...
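The two definitions can be checked against each other on a concrete matrix (a numerical sketch; the random sample of vectors X is of course no substitute for the quantifier "for every X"):

```python
# Both characterizations of positive semidefiniteness on M = [[2, -1], [-1, 2]].
import numpy as np

M = np.array([[2.0, -1.0],
              [-1.0, 2.0]])

eigvals = np.linalg.eigvalsh(M)                 # eigenvalues of a symmetric matrix
psd_by_eigs = bool(np.all(eigvals >= -1e-12))   # definition via eigenvalues

rng = np.random.default_rng(0)
quad = [float(x @ M @ x) for x in rng.standard_normal((1000, 2))]
psd_by_quad = min(quad) >= 0                    # definition via the quadratic form
print(psd_by_eigs, psd_by_quad, eigvals)
```

Here the eigenvalues are 1 and 3, and indeed ᵗX M X = x₁² + x₂² + (x₁ − x₂)² ≥ 0 for every X.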

105 Chapter 11: Convexity and sufficient conditions for optima. Section 1: positive, negative Hessian. Definition (definition 1). A real symmetric matrix M is negative semidefinite if for every n × 1 real matrix X, we have ᵗX M X ≤ 0. Definition (equivalent definition). A real symmetric matrix M is negative semidefinite if all the eigenvalues of M are ≤ 0.

106 Chapter 11: Convexity and sufficient conditions for optima. Section 1: positive, negative Hessian. Definition. A real symmetric matrix M is positive definite if for every n × 1 real matrix X ≠ 0, we have ᵗX M X > 0. Definition (equivalent definition). A real symmetric matrix M is positive definite if all the eigenvalues of M are > 0.

107 Chapter 11: Convexity and sufficient conditions for optima. Section 1: positive, negative Hessian. Definition (definition 1). A real symmetric matrix M is negative definite if for every n × 1 real matrix X ≠ 0, we have ᵗX M X < 0. Definition (equivalent definition). A real symmetric matrix M is negative definite if all the eigenvalues of M are < 0.

108 Chapter 11: Convexity and sufficient conditions for optima. Section 1: positive, negative Hessian. Do we always have to compute eigenvalues to know if M is positive, negative, ...? Not for n = 2 or n = 3.

109 Chapter 11: Convexity and sufficient conditions for optima. Section 1: positive, negative Hessian. For n = 2, simply use the trace and the determinant. A symmetric real 2 × 2 matrix M is positive semidefinite if and only if tr(M) ≥ 0 and det(M) ≥ 0. A symmetric real 2 × 2 matrix M is positive definite if and only if tr(M) > 0 and det(M) > 0. A symmetric real 2 × 2 matrix M is negative semidefinite if and only if tr(M) ≤ 0 and det(M) ≥ 0. A symmetric real 2 × 2 matrix M is negative definite if and only if tr(M) < 0 and det(M) > 0.
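These criteria follow from the identities tr(M) = λ₁ + λ₂ and det(M) = λ₁λ₂ for the eigenvalues of a 2 × 2 matrix. A small sketch applying them to three test matrices (the classification order in the helper is a choice of this sketch: definite cases are reported before semidefinite ones):

```python
# Trace/determinant classification of symmetric 2 x 2 matrices.
import numpy as np

def classify(M):
    tr, det = np.trace(M), np.linalg.det(M)
    if tr > 0 and det > 0:
        return "positive definite"
    if tr < 0 and det > 0:
        return "negative definite"
    if tr >= 0 and det >= 0:
        return "positive semidefinite"
    if tr <= 0 and det >= 0:
        return "negative semidefinite"
    return "indefinite"           # det < 0: eigenvalues of opposite signs

A = np.array([[2.0, 1.0], [1.0, 2.0]])     # eigenvalues 1 and 3
B = np.array([[-2.0, 1.0], [1.0, -2.0]])   # eigenvalues -1 and -3
C = np.array([[1.0, 0.0], [0.0, -2.0]])    # eigenvalues 1 and -2
print(classify(A), classify(B), classify(C))
```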

Paris. Optimization. Philippe Bich (Paris 1 Panthéon-Sorbonne and PSE), Paris.


Lecture 1: Introduction. Outline. B9824 Foundations of Optimization. Fall Administrative matters. 2. Introduction. 3. Existence of optima B9824 Foundations of Optimization Lecture 1: Introduction Fall 2010 Copyright 2010 Ciamac Moallemi Outline 1. Administrative matters 2. Introduction 3. Existence of optima 4. Local theory of unconstrained

More information

Prague, II.2. Integrability (existence of the Riemann integral) sufficient conditions... 37

Prague, II.2. Integrability (existence of the Riemann integral) sufficient conditions... 37 Mathematics II Prague, 1998 ontents Introduction.................................................................... 3 I. Functions of Several Real Variables (Stanislav Kračmar) II. I.1. Euclidean space

More information

Convex Optimization M2

Convex Optimization M2 Convex Optimization M2 Lecture 3 A. d Aspremont. Convex Optimization M2. 1/49 Duality A. d Aspremont. Convex Optimization M2. 2/49 DMs DM par email: dm.daspremont@gmail.com A. d Aspremont. Convex Optimization

More information

where u is the decision-maker s payoff function over her actions and S is the set of her feasible actions.

where u is the decision-maker s payoff function over her actions and S is the set of her feasible actions. Seminars on Mathematics for Economics and Finance Topic 3: Optimization - interior optima 1 Session: 11-12 Aug 2015 (Thu/Fri) 10:00am 1:00pm I. Optimization: introduction Decision-makers (e.g. consumers,

More information

x +3y 2t = 1 2x +y +z +t = 2 3x y +z t = 7 2x +6y +z +t = a

x +3y 2t = 1 2x +y +z +t = 2 3x y +z t = 7 2x +6y +z +t = a UCM Final Exam, 05/8/014 Solutions 1 Given the parameter a R, consider the following linear system x +y t = 1 x +y +z +t = x y +z t = 7 x +6y +z +t = a (a (6 points Discuss the system depending on the

More information

In English, this means that if we travel on a straight line between any two points in C, then we never leave C.

In English, this means that if we travel on a straight line between any two points in C, then we never leave C. Convex sets In this section, we will be introduced to some of the mathematical fundamentals of convex sets. In order to motivate some of the definitions, we will look at the closest point problem from

More information

Convex Functions. Pontus Giselsson

Convex Functions. Pontus Giselsson Convex Functions Pontus Giselsson 1 Today s lecture lower semicontinuity, closure, convex hull convexity preserving operations precomposition with affine mapping infimal convolution image function supremum

More information

Lecture Note 5: Semidefinite Programming for Stability Analysis

Lecture Note 5: Semidefinite Programming for Stability Analysis ECE7850: Hybrid Systems:Theory and Applications Lecture Note 5: Semidefinite Programming for Stability Analysis Wei Zhang Assistant Professor Department of Electrical and Computer Engineering Ohio State

More information

Lecture 4 Lebesgue spaces and inequalities

Lecture 4 Lebesgue spaces and inequalities Lecture 4: Lebesgue spaces and inequalities 1 of 10 Course: Theory of Probability I Term: Fall 2013 Instructor: Gordan Zitkovic Lecture 4 Lebesgue spaces and inequalities Lebesgue spaces We have seen how

More information

Mathematical Analysis Outline. William G. Faris

Mathematical Analysis Outline. William G. Faris Mathematical Analysis Outline William G. Faris January 8, 2007 2 Chapter 1 Metric spaces and continuous maps 1.1 Metric spaces A metric space is a set X together with a real distance function (x, x ) d(x,

More information

Lecture 3: Lagrangian duality and algorithms for the Lagrangian dual problem

Lecture 3: Lagrangian duality and algorithms for the Lagrangian dual problem Lecture 3: Lagrangian duality and algorithms for the Lagrangian dual problem Michael Patriksson 0-0 The Relaxation Theorem 1 Problem: find f := infimum f(x), x subject to x S, (1a) (1b) where f : R n R

More information

Two hours. To be provided by Examinations Office: Mathematical Formula Tables. THE UNIVERSITY OF MANCHESTER. xx xxxx 2017 xx:xx xx.

Two hours. To be provided by Examinations Office: Mathematical Formula Tables. THE UNIVERSITY OF MANCHESTER. xx xxxx 2017 xx:xx xx. Two hours To be provided by Examinations Office: Mathematical Formula Tables. THE UNIVERSITY OF MANCHESTER CONVEX OPTIMIZATION - SOLUTIONS xx xxxx 27 xx:xx xx.xx Answer THREE of the FOUR questions. If

More information

ECON 5111 Mathematical Economics

ECON 5111 Mathematical Economics Test 1 October 1, 2010 1. Construct a truth table for the following statement: [p (p q)] q. 2. A prime number is a natural number that is divisible by 1 and itself only. Let P be the set of all prime numbers

More information

B553 Lecture 3: Multivariate Calculus and Linear Algebra Review

B553 Lecture 3: Multivariate Calculus and Linear Algebra Review B553 Lecture 3: Multivariate Calculus and Linear Algebra Review Kris Hauser December 30, 2011 We now move from the univariate setting to the multivariate setting, where we will spend the rest of the class.

More information

A LITTLE REAL ANALYSIS AND TOPOLOGY

A LITTLE REAL ANALYSIS AND TOPOLOGY A LITTLE REAL ANALYSIS AND TOPOLOGY 1. NOTATION Before we begin some notational definitions are useful. (1) Z = {, 3, 2, 1, 0, 1, 2, 3, }is the set of integers. (2) Q = { a b : aεz, bεz {0}} is the set

More information

Lakehead University ECON 4117/5111 Mathematical Economics Fall 2002

Lakehead University ECON 4117/5111 Mathematical Economics Fall 2002 Test 1 September 20, 2002 1. Determine whether each of the following is a statement or not (answer yes or no): (a) Some sentences can be labelled true and false. (b) All students should study mathematics.

More information

Introduction to Real Analysis Alternative Chapter 1

Introduction to Real Analysis Alternative Chapter 1 Christopher Heil Introduction to Real Analysis Alternative Chapter 1 A Primer on Norms and Banach Spaces Last Updated: March 10, 2018 c 2018 by Christopher Heil Chapter 1 A Primer on Norms and Banach Spaces

More information

Econ Slides from Lecture 1

Econ Slides from Lecture 1 Econ 205 Sobel Econ 205 - Slides from Lecture 1 Joel Sobel August 23, 2010 Warning I can t start without assuming that something is common knowledge. You can find basic definitions of Sets and Set Operations

More information

Mathematics for Economists

Mathematics for Economists Mathematics for Economists Victor Filipe Sao Paulo School of Economics FGV Metric Spaces: Basic Definitions Victor Filipe (EESP/FGV) Mathematics for Economists Jan.-Feb. 2017 1 / 34 Definitions and Examples

More information

Some Properties of the Augmented Lagrangian in Cone Constrained Optimization

Some Properties of the Augmented Lagrangian in Cone Constrained Optimization MATHEMATICS OF OPERATIONS RESEARCH Vol. 29, No. 3, August 2004, pp. 479 491 issn 0364-765X eissn 1526-5471 04 2903 0479 informs doi 10.1287/moor.1040.0103 2004 INFORMS Some Properties of the Augmented

More information

Chapter 7. Extremal Problems. 7.1 Extrema and Local Extrema

Chapter 7. Extremal Problems. 7.1 Extrema and Local Extrema Chapter 7 Extremal Problems No matter in theoretical context or in applications many problems can be formulated as problems of finding the maximum or minimum of a function. Whenever this is the case, advanced

More information

ECARES Université Libre de Bruxelles MATH CAMP Basic Topology

ECARES Université Libre de Bruxelles MATH CAMP Basic Topology ECARES Université Libre de Bruxelles MATH CAMP 03 Basic Topology Marjorie Gassner Contents: - Subsets, Cartesian products, de Morgan laws - Ordered sets, bounds, supremum, infimum - Functions, image, preimage,

More information

LECTURE 15: COMPLETENESS AND CONVEXITY

LECTURE 15: COMPLETENESS AND CONVEXITY LECTURE 15: COMPLETENESS AND CONVEXITY 1. The Hopf-Rinow Theorem Recall that a Riemannian manifold (M, g) is called geodesically complete if the maximal defining interval of any geodesic is R. On the other

More information

CHAPTER 4: HIGHER ORDER DERIVATIVES. Likewise, we may define the higher order derivatives. f(x, y, z) = xy 2 + e zx. y = 2xy.

CHAPTER 4: HIGHER ORDER DERIVATIVES. Likewise, we may define the higher order derivatives. f(x, y, z) = xy 2 + e zx. y = 2xy. April 15, 2009 CHAPTER 4: HIGHER ORDER DERIVATIVES In this chapter D denotes an open subset of R n. 1. Introduction Definition 1.1. Given a function f : D R we define the second partial derivatives as

More information

Chapter 1. Preliminaries. The purpose of this chapter is to provide some basic background information. Linear Space. Hilbert Space.

Chapter 1. Preliminaries. The purpose of this chapter is to provide some basic background information. Linear Space. Hilbert Space. Chapter 1 Preliminaries The purpose of this chapter is to provide some basic background information. Linear Space Hilbert Space Basic Principles 1 2 Preliminaries Linear Space The notion of linear space

More information

EC /11. Math for Microeconomics September Course, Part II Problem Set 1 with Solutions. a11 a 12. x 2

EC /11. Math for Microeconomics September Course, Part II Problem Set 1 with Solutions. a11 a 12. x 2 LONDON SCHOOL OF ECONOMICS Professor Leonardo Felli Department of Economics S.478; x7525 EC400 2010/11 Math for Microeconomics September Course, Part II Problem Set 1 with Solutions 1. Show that the general

More information

EC9A0: Pre-sessional Advanced Mathematics Course. Lecture Notes: Unconstrained Optimisation By Pablo F. Beker 1

EC9A0: Pre-sessional Advanced Mathematics Course. Lecture Notes: Unconstrained Optimisation By Pablo F. Beker 1 EC9A0: Pre-sessional Advanced Mathematics Course Lecture Notes: Unconstrained Optimisation By Pablo F. Beker 1 1 Infimum and Supremum Definition 1. Fix a set Y R. A number α R is an upper bound of Y if

More information

2 Sequences, Continuity, and Limits

2 Sequences, Continuity, and Limits 2 Sequences, Continuity, and Limits In this chapter, we introduce the fundamental notions of continuity and limit of a real-valued function of two variables. As in ACICARA, the definitions as well as proofs

More information

Functional Analysis. Franck Sueur Metric spaces Definitions Completeness Compactness Separability...

Functional Analysis. Franck Sueur Metric spaces Definitions Completeness Compactness Separability... Functional Analysis Franck Sueur 2018-2019 Contents 1 Metric spaces 1 1.1 Definitions........................................ 1 1.2 Completeness...................................... 3 1.3 Compactness......................................

More information

Chapter 1 Preliminaries

Chapter 1 Preliminaries Chapter 1 Preliminaries 1.1 Conventions and Notations Throughout the book we use the following notations for standard sets of numbers: N the set {1, 2,...} of natural numbers Z the set of integers Q the

More information

Lakehead University ECON 4117/5111 Mathematical Economics Fall 2003

Lakehead University ECON 4117/5111 Mathematical Economics Fall 2003 Test 1 September 26, 2003 1. Construct a truth table to prove each of the following tautologies (p, q, r are statements and c is a contradiction): (a) [p (q r)] [(p q) r] (b) (p q) [(p q) c] 2. Answer

More information

Mid Term-1 : Practice problems

Mid Term-1 : Practice problems Mid Term-1 : Practice problems These problems are meant only to provide practice; they do not necessarily reflect the difficulty level of the problems in the exam. The actual exam problems are likely to

More information

Analysis Finite and Infinite Sets The Real Numbers The Cantor Set

Analysis Finite and Infinite Sets The Real Numbers The Cantor Set Analysis Finite and Infinite Sets Definition. An initial segment is {n N n n 0 }. Definition. A finite set can be put into one-to-one correspondence with an initial segment. The empty set is also considered

More information

Econ Slides from Lecture 10

Econ Slides from Lecture 10 Econ 205 Sobel Econ 205 - Slides from Lecture 10 Joel Sobel September 2, 2010 Example Find the tangent plane to {x x 1 x 2 x 2 3 = 6} R3 at x = (2, 5, 2). If you let f (x) = x 1 x 2 x3 2, then this is

More information

GENERALIZED CONVEXITY AND OPTIMALITY CONDITIONS IN SCALAR AND VECTOR OPTIMIZATION

GENERALIZED CONVEXITY AND OPTIMALITY CONDITIONS IN SCALAR AND VECTOR OPTIMIZATION Chapter 4 GENERALIZED CONVEXITY AND OPTIMALITY CONDITIONS IN SCALAR AND VECTOR OPTIMIZATION Alberto Cambini Department of Statistics and Applied Mathematics University of Pisa, Via Cosmo Ridolfi 10 56124

More information

Mathematical Foundations -1- Constrained Optimization. Constrained Optimization. An intuitive approach 2. First Order Conditions (FOC) 7

Mathematical Foundations -1- Constrained Optimization. Constrained Optimization. An intuitive approach 2. First Order Conditions (FOC) 7 Mathematical Foundations -- Constrained Optimization Constrained Optimization An intuitive approach First Order Conditions (FOC) 7 Constraint qualifications 9 Formal statement of the FOC for a maximum

More information

Convex Functions and Optimization

Convex Functions and Optimization Chapter 5 Convex Functions and Optimization 5.1 Convex Functions Our next topic is that of convex functions. Again, we will concentrate on the context of a map f : R n R although the situation can be generalized

More information

B. Appendix B. Topological vector spaces

B. Appendix B. Topological vector spaces B.1 B. Appendix B. Topological vector spaces B.1. Fréchet spaces. In this appendix we go through the definition of Fréchet spaces and their inductive limits, such as they are used for definitions of function

More information

Lecture 4: Optimization. Maximizing a function of a single variable

Lecture 4: Optimization. Maximizing a function of a single variable Lecture 4: Optimization Maximizing or Minimizing a Function of a Single Variable Maximizing or Minimizing a Function of Many Variables Constrained Optimization Maximizing a function of a single variable

More information

Analysis and Linear Algebra. Lectures 1-3 on the mathematical tools that will be used in C103

Analysis and Linear Algebra. Lectures 1-3 on the mathematical tools that will be used in C103 Analysis and Linear Algebra Lectures 1-3 on the mathematical tools that will be used in C103 Set Notation A, B sets AcB union A1B intersection A\B the set of objects in A that are not in B N. Empty set

More information

Chapter 2: Unconstrained Extrema

Chapter 2: Unconstrained Extrema Chapter 2: Unconstrained Extrema Math 368 c Copyright 2012, 2013 R Clark Robinson May 22, 2013 Chapter 2: Unconstrained Extrema 1 Types of Sets Definition For p R n and r > 0, the open ball about p of

More information

3 (Due ). Let A X consist of points (x, y) such that either x or y is a rational number. Is A measurable? What is its Lebesgue measure?

3 (Due ). Let A X consist of points (x, y) such that either x or y is a rational number. Is A measurable? What is its Lebesgue measure? MA 645-4A (Real Analysis), Dr. Chernov Homework assignment 1 (Due ). Show that the open disk x 2 + y 2 < 1 is a countable union of planar elementary sets. Show that the closed disk x 2 + y 2 1 is a countable

More information

Convex Analysis and Economic Theory AY Elementary properties of convex functions

Convex Analysis and Economic Theory AY Elementary properties of convex functions Division of the Humanities and Social Sciences Ec 181 KC Border Convex Analysis and Economic Theory AY 2018 2019 Topic 6: Convex functions I 6.1 Elementary properties of convex functions We may occasionally

More information

ECON 4117/5111 Mathematical Economics

ECON 4117/5111 Mathematical Economics Test 1 September 23, 2016 1. Suppose that p and q are logical statements. The exclusive or, denoted by p Y q, is true when only one of p and q is true. (a) Construct the truth table of p Y q. (b) Prove

More information

CS-E4830 Kernel Methods in Machine Learning

CS-E4830 Kernel Methods in Machine Learning CS-E4830 Kernel Methods in Machine Learning Lecture 3: Convex optimization and duality Juho Rousu 27. September, 2017 Juho Rousu 27. September, 2017 1 / 45 Convex optimization Convex optimisation This

More information

Principles in Economics and Mathematics: the mathematical part

Principles in Economics and Mathematics: the mathematical part Principles in Economics and Mathematics: the mathematical part Bram De Rock Bram De Rock Mathematical principles 1/65 Practicalities about me Bram De Rock Office: R.42.6.218 E-mail: bderock@ulb.ac.be Phone:

More information

E 600 Chapter 3: Multivariate Calculus

E 600 Chapter 3: Multivariate Calculus E 600 Chapter 3: Multivariate Calculus Simona Helmsmueller August 21, 2017 Goals of this lecture: Know when an inverse to a function exists, be able to graphically and analytically determine whether a

More information

Least Sparsity of p-norm based Optimization Problems with p > 1

Least Sparsity of p-norm based Optimization Problems with p > 1 Least Sparsity of p-norm based Optimization Problems with p > Jinglai Shen and Seyedahmad Mousavi Original version: July, 07; Revision: February, 08 Abstract Motivated by l p -optimization arising from

More information

2. Dual space is essential for the concept of gradient which, in turn, leads to the variational analysis of Lagrange multipliers.

2. Dual space is essential for the concept of gradient which, in turn, leads to the variational analysis of Lagrange multipliers. Chapter 3 Duality in Banach Space Modern optimization theory largely centers around the interplay of a normed vector space and its corresponding dual. The notion of duality is important for the following

More information

Math 209B Homework 2

Math 209B Homework 2 Math 29B Homework 2 Edward Burkard Note: All vector spaces are over the field F = R or C 4.6. Two Compactness Theorems. 4. Point Set Topology Exercise 6 The product of countably many sequentally compact

More information

Math Advanced Calculus II

Math Advanced Calculus II Math 452 - Advanced Calculus II Manifolds and Lagrange Multipliers In this section, we will investigate the structure of critical points of differentiable functions. In practice, one often is trying to

More information

2 (Bonus). Let A X consist of points (x, y) such that either x or y is a rational number. Is A measurable? What is its Lebesgue measure?

2 (Bonus). Let A X consist of points (x, y) such that either x or y is a rational number. Is A measurable? What is its Lebesgue measure? MA 645-4A (Real Analysis), Dr. Chernov Homework assignment 1 (Due 9/5). Prove that every countable set A is measurable and µ(a) = 0. 2 (Bonus). Let A consist of points (x, y) such that either x or y is

More information

1 Lagrange Multiplier Method

1 Lagrange Multiplier Method 1 Lagrange Multiplier Method Near a maximum the decrements on both sides are in the beginning only imperceptible. J. Kepler When a quantity is greatest or least, at that moment its flow neither increases

More information

Characterisation of Accumulation Points. Convergence in Metric Spaces. Characterisation of Closed Sets. Characterisation of Closed Sets

Characterisation of Accumulation Points. Convergence in Metric Spaces. Characterisation of Closed Sets. Characterisation of Closed Sets Convergence in Metric Spaces Functional Analysis Lecture 3: Convergence and Continuity in Metric Spaces Bengt Ove Turesson September 4, 2016 Suppose that (X, d) is a metric space. A sequence (x n ) X is

More information

Convex Analysis and Economic Theory Winter 2018

Convex Analysis and Economic Theory Winter 2018 Division of the Humanities and Social Sciences Ec 181 KC Border Convex Analysis and Economic Theory Winter 2018 Supplement A: Mathematical background A.1 Extended real numbers The extended real number

More information

TMA 4180 Optimeringsteori KARUSH-KUHN-TUCKER THEOREM

TMA 4180 Optimeringsteori KARUSH-KUHN-TUCKER THEOREM TMA 4180 Optimeringsteori KARUSH-KUHN-TUCKER THEOREM H. E. Krogstad, IMF, Spring 2012 Karush-Kuhn-Tucker (KKT) Theorem is the most central theorem in constrained optimization, and since the proof is scattered

More information

Tangent spaces, normals and extrema

Tangent spaces, normals and extrema Chapter 3 Tangent spaces, normals and extrema If S is a surface in 3-space, with a point a S where S looks smooth, i.e., without any fold or cusp or self-crossing, we can intuitively define the tangent

More information

Problem 1: Compactness (12 points, 2 points each)

Problem 1: Compactness (12 points, 2 points each) Final exam Selected Solutions APPM 5440 Fall 2014 Applied Analysis Date: Tuesday, Dec. 15 2014, 10:30 AM to 1 PM You may assume all vector spaces are over the real field unless otherwise specified. Your

More information

CONSTRAINT QUALIFICATIONS, LAGRANGIAN DUALITY & SADDLE POINT OPTIMALITY CONDITIONS

CONSTRAINT QUALIFICATIONS, LAGRANGIAN DUALITY & SADDLE POINT OPTIMALITY CONDITIONS CONSTRAINT QUALIFICATIONS, LAGRANGIAN DUALITY & SADDLE POINT OPTIMALITY CONDITIONS A Dissertation Submitted For The Award of the Degree of Master of Philosophy in Mathematics Neelam Patel School of Mathematics

More information