AN INTRODUCTION TO CONVEXITY


GEIR DAHL

NOVEMBER 2010

University of Oslo, Centre of Mathematics for Applications, P.O. Box 1053, Blindern, 0316 Oslo, Norway


Contents

1 The basic concepts
  1.1 Is convexity useful?
  1.2 Nonnegative vectors
  1.3 Linear programming
  1.4 Convex sets, cones and polyhedra
  1.5 Linear algebra and affine sets
  1.6 Exercises

2 Convex hulls and Carathéodory's theorem
  2.1 Convex and nonnegative combinations
  2.2 The convex hull
  2.3 Affine independence and dimension
  2.4 Convex sets and topology
  2.5 Carathéodory's theorem and some consequences
  2.6 Exercises

3 Projection and separation
  3.1 The projection operator
  3.2 Separation of convex sets
  3.3 Exercises

4 Representation of convex sets
  4.1 Faces of convex sets
  4.2 The recession cone
  4.3 Inner representation and Minkowski's theorem
  4.4 Polytopes and polyhedra
  4.5 Exercises

5 Convex functions
  5.1 Convex functions of one variable
  5.2 Convex functions of several variables
  5.3 Continuity and differentiability
  5.4 Exercises

6 Nonlinear and convex optimization
  6.1 Local and global minimum in convex optimization
  6.2 Optimality conditions for convex optimization
  6.3 Feasible direction methods
  6.4 Nonlinear optimization and Lagrange multipliers
  6.5 Nonlinear optimization: inequality constraints
  6.6 An augmented Lagrangian method
  6.7 Exercises


List of Figures

1.1 Some convex functions
1.2 Some convex sets in the plane
1.3 Linear programming
1.4 Linear system and polyhedron
1.5 A convex cone in $\mathbb{R}^3$
1.6 Affine set
2.1 Convex combinations
2.2 Convex hull
2.3 Affine independence
2.4 Dimensions
2.5 Compactness and continuity
2.6 Relative topology
2.7 Carathéodory's theorem
3.1 Projection onto a convex set
3.2 Separation
3.3 Nontrivial supporting hyperplane
3.4 Supporting hyperplane and halfspace
3.5 Outer description
3.6 Geometry of Farkas' lemma
4.1 Some faces of the unit cube
4.2 Exposed face
4.3 Extreme point and halfline
4.4 Minkowski's theorem, inner description
4.5 Polytope = bounded polyhedron
5.1 Convex function
5.2 Increasing slopes
5.3 Subdifferential and linear approximation
5.4 Minimum of a convex function
5.5 Convexity and epigraph
5.6 Maximum of convex functions
5.7 Supporting function
6.1 KKT conditions


Preface. This report is written in connection with a new course (in year 2000!) with the title "Convexity and optimization" at the University of Oslo. The course aims at upper undergraduate students in (applied) mathematics, statistics or mathematical economics. The main goal of the course is to give an introduction to the subjects of linear programming and convexity.

Many universities offer linear programming courses at an undergraduate level, and there are many books written for that purpose. There are several interesting and important topics that one typically covers in such a course: modeling, the simplex algorithm, duality, sensitivity analysis, implementation issues, applications in network flows, game theory, approximation etc. However, for students in (applied) mathematics (or economics) I believe it is important to understand the neighborhood of linear programming, which is convexity (or convex analysis). Convexity is fundamental to the whole area of optimization, and it is also of great importance in mathematical statistics, economics, functional analysis, approximation theory etc. The purpose of this report is to introduce the reader to convexity.

The prerequisites are mainly linear algebra and linear programming (LP), including the duality theorem and the simplex algorithm. In our "Convexity and optimization" course we first teach LP, now based on the excellent book by Vanderbei [15]. This book is extremely well written, and explains ideas and techniques elegantly without too many technicalities. The second, and final, part of the course is to go into convexity, where this report may be used. There is plenty of material in convexity, and the present text gradually became longer than originally planned. As a consequence, there is probably enough material in this report for a separate introductory course in convexity. In our "Convexity and optimization" course we therefore have to omit some of the material.

A classic book in convex analysis is Rockafellar's book [11]. A modern text which treats convex analysis in combination with optimization is [6]. Comprehensive treatments of convex analysis are [16] and [12]; the latter book is an advanced text which contains lots of recent results and historical notes. For a general treatment of convexity with applications to theoretical statistics, see [14]. The book [17] also treats convexity in connection with a combinatorial study of polytopes.

In this text we restrict attention to convexity in $\mathbb{R}^n$. However, the reader should know that the notion of convexity makes sense in vector spaces more generally. The whole theory can be directly translated to the case of finite-dimensional vector spaces (e.g., the space of real $m \times n$ matrices).

Many results, but not all of them, also hold in infinite-dimensional vector spaces; this is treated within the area of functional analysis.

Acknowledgment. I would like to thank Bjarne Johannessen for producing the figures and for giving suggestions that improved the text. Moreover, I appreciate very much the stimulating environment here at the University of Oslo, and I am grateful to my colleagues for support and interesting discussions. GD, Oslo, Jan. 7.

In the present edition a number of misprints have been corrected and a few minor changes have been made. GD, Oslo, Dec. 15.

Now a new chapter on convex optimization has been added and, again, some minor changes have been made. GD, Oslo, Oct. 13.

Some minor corrections have been made. GD, Oslo, Aug. 25, 2009.

Chapter 1

The basic concepts

This first chapter introduces convex sets and illustrates how convex sets arise in different contexts.

1.1 Is convexity useful?

Many people think that it is, even people not working with convexity! But this may not convince you, so maybe some of our examples below give you some motivation for working your way into the world of convexity. These examples are all presented in an informal style to increase readability.

Example 1.1.1 (Optimization and convex functions) Often one meets optimization problems where one wants to minimize a real-valued function of $n$ variables, say $f(x)$, where $x = (x_1, \dots, x_n)$. This arises in, e.g., economic applications (cost minimization or profit maximization), in statistical applications (estimation, regression, curve fitting), approximation problems, scheduling and planning problems, image analysis, medical imaging, engineering applications etc. The ideal goal would be to find a point $x^*$ such that $f(x^*) \le f(x)$ holds for all other points $x$; such a solution $x^*$ is called a globally optimal solution. The problem is that most (numerical) methods for minimizing a function can only find a locally optimal solution, i.e., a point $x^0$ with function value no greater than the function values of points sufficiently near $x^0$. Unfortunately, although a locally optimal solution is good locally, it may be very poor compared to some other solutions. Thus, for instance, in a cost minimization problem (where $x = (x_1, \dots, x_n)$ is an activity vector) it would be very good news if we were able to prove (using our mathematical skills!) that our computed locally optimal solution is also a globally optimal solution. In that case we could say to our boss: listen, here is my solution $x^*$, and no other person can come up with another solution having lower total cost.
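To see this inequality in action numerically, here is a minimal Python sketch (an illustration of ours, not part of the original course material): it samples random pairs of points and checks the convexity inequality $f((1-\lambda)x + \lambda y) \le (1-\lambda)f(x) + \lambda f(y)$, which is introduced formally in Chapter 5. The helper name and tolerance are our own choices.

```python
import numpy as np

def convexity_spot_check(f, dim=2, trials=1000, seed=0):
    """Sample the inequality f((1-lam)x + lam y) <= (1-lam)f(x) + lam f(y).
    One failure disproves convexity; passing all trials is only evidence."""
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        x, y = rng.normal(size=dim), rng.normal(size=dim)
        lam = rng.uniform()
        z = (1 - lam) * x + lam * y
        if f(z) > (1 - lam) * f(x) + lam * f(y) + 1e-9:
            return False
    return True

print(convexity_spot_check(lambda v: np.sum(v**2)))    # True:  ||x||^2 is convex
print(convexity_spot_check(lambda v: -np.sum(v**2)))   # False: concave instead
```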

[Figure 1.1: Some convex functions]

If the function $f$ is convex, then it is always true that a locally optimal solution is also globally optimal! We study convex functions in Chapter 5. Some convex functions are illustrated in Fig. 1.1. You will learn what a convex function is, how to decide if a given function is convex, and how to minimize a convex function. At this point you might ask one of the following questions:

- I recall that a function $f : \mathbb{R} \to \mathbb{R}$ is convex whenever its second derivative is nonnegative, i.e., the graph bends upwards. But what does it mean that a function of several variables is convex?
- Does the "local implies global" property above also hold for other functions than the convex ones?
- Will I meet convex functions in other areas of mathematics, statistics, numerical analysis etc.?
- What if the function $f$ is only defined on a subset $S$ of $\mathbb{R}^n$? Can $f$ still be convex? If so, how can we minimize it? And does the "local implies global" property still hold?

You will get answers to these, and many more, questions. Concerning the last question, we shall see that the set $S$ of points should have a certain property in order to make an extended definition of convexity meaningful. This property is: $S$ is a convex set.

Example 1.1.2 (Convex set) Loosely speaking, a convex set in $\mathbb{R}^2$ (or $\mathbb{R}^n$) is a set with no "holes". More accurately, a convex set $C$ has the following property: whenever we choose two points in the set, say $x, y \in C$, then all points on the line segment between $x$ and $y$ also lie in $C$. Some examples of convex sets in the plane are: a sphere (ball), an ellipsoid, a point, a line, a line segment, a rectangle, a triangle; see Fig. 1.2.

But, for instance, a set with a finite number $p$ of points is only convex when $p = 1$. The union of two disjoint (closed) triangles is also nonconvex.

Why are convex sets important? They arise in lots of different situations where the convexity property is of importance. For instance, in optimization the set of feasible points is frequently convex. This is true for linear programming and many other important optimization problems. We can say more: the convexity of the feasible set plays a role for the existence of optimal solutions, the structure of the set of optimal solutions, and (very important!) how to solve optimization problems numerically.

But convex sets arise in other areas than optimization. For instance, an important area in statistics (both in theory and applications) is estimation, where one uses statistical observations to estimate the value of one or more unknown parameters in a model. To measure the quality of a solution one uses a loss function and, quite often, this loss function is convex. In statistical decision theory the concept of risk sets is central. Well, risk sets are convex sets. Moreover, under some additional assumptions on the statistical setting, these risk sets are very special convex sets, so-called polytopes. We shall study polytopes in detail later.

Another example from statistics is the expectation operator. The expectation of a random variable relates to convexity. Assume that $X$ is a discrete variable taking values in some finite set of real numbers, say $\{x_1, \dots, x_r\}$, with probability $p_j$ of the event $X = x_j$. Probabilities are all nonnegative and sum to one, so $p_j \ge 0$ and $\sum_{j=1}^r p_j = 1$. The expectation (or mean) of $X$ is the number
$$EX = \sum_{j=1}^r p_j x_j.$$
It should be regarded as a weighted average of the possible values that $X$ can attain, where the weights are simply the probabilities. Thus, a very likely event (meaning that $p_j$ is near one) gets large weight in this sum. Now, in the language of convexity, we say that $EX$ is a convex combination of the numbers $x_1, \dots, x_r$. We shall work a lot with convex combinations. An extension is when the discrete random variable is a vector, so it attains values in a finite set $S = \{x_1, \dots, x_r\}$ of points in $\mathbb{R}^n$. The expectation is now defined by $EX = \sum_{j=1}^r p_j x_j$ which, again, is a convex combination of the points in $S$. A question: assume that $n = 2$ and $r = 4$ and choose some vectors $x_1, \dots, x_4 \in \mathbb{R}^2$. Experiment with some different probabilities $p_1, \dots, p_4$ and calculate $EX$ in each case. If you now vary the probabilities as much as possible (nonnegative, sum to one), which set of possible expectations do you get?
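The question can be explored numerically. The following sketch is our own; the four points and numpy's Dirichlet sampler are just convenient choices for producing vectors $p \ge 0$ with $\sum_j p_j = 1$. Plotting many such expectations would show them filling the convex hull of the four points, which hints at the answer.

```python
import numpy as np

# Our own experiment: four points in the plane and random probability
# vectors p (Dirichlet-distributed, so p >= 0 and sum(p) = 1).
# Each EX = sum_j p_j x^j is a convex combination of the points.
rng = np.random.default_rng(1)
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])

p = rng.dirichlet(np.ones(4), size=5)   # each row is a probability vector
print(p @ pts)   # the expectations EX; all lie in the unit square,
                 # the convex hull of the four chosen points
```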

[Figure 1.2: Some convex sets in the plane]

Example 1.1.3 (Approximation) In many applications of mathematics different approximation problems arise. Many such problems are of the following type: given some set $S \subseteq \mathbb{R}^n$ and a vector $a \notin S$, find a vector $x \in S$ which is as close to $a$ as possible among elements in $S$. The form of this problem depends on the set $S$ (and $a$) and how one measures the distance between vectors. In order to measure distance one may use the Euclidean norm, given by $\|x\| = (\sum_{j=1}^n x_j^2)^{1/2}$, or some other norm. (We shall discuss different vector norms in Chapter 5.) Is there any connection to convexity here? First, norm functions, i.e., functions $x \mapsto \|x\|$, are convex functions. This is so for all norms, not just the Euclidean norm. Second, a basic question is whether a nearest point (to $a$ in $S$) exists. The answer is yes, provided that $S$ is a closed set. We discuss closed sets (and some topology) in Chapter 2. Next, we may be interested in knowing whether there is more than one point that is nearest to $a$ in $S$. It turns out that if $S$ is a convex set (and the norm is the Euclidean norm), then the nearest point is unique. This may not be so for nonconvex sets. Even more can be said, as a theorem of Motzkin says that... Well, we keep Motzkin's theorem a secret for the time being.

Hopefully, you now have an idea of what convexity is and where convexity questions arise. Let us start the work!

1.2 Nonnegative vectors

We are here concerned with the set $\mathbb{R}^n$ of real vectors $x = (x_1, \dots, x_n)$. We use boldface symbols for vectors and matrices. The set (vector space) of all real matrices with $m$ rows and $n$ columns is denoted by $\mathbb{R}^{m,n}$. From linear algebra we know how to add vectors and how to multiply a vector by a scalar (a real number). Convexity deals with inequalities, and it is convenient to say that $x \in \mathbb{R}^n$ is nonnegative if each component $x_i$ is nonnegative.

We let $\mathbb{R}^n_+$ denote the set of all nonnegative vectors. The zero vector is written $O$ (the dimension is suppressed, but should be clear from the context). We shall frequently use inequalities for vectors, so if $x, y \in \mathbb{R}^n$ we write $x \le y$ (or $y \ge x$), and this means that $x_i \le y_i$ for $i = 1, \dots, n$. Note that this is equivalent to $y - x \ge O$.

Exercise 1.1. Let $x^1, x^2, y^1, y^2 \in \mathbb{R}^n$ and assume that $x^1 \le x^2$ and $y^1 \le y^2$. Verify that the inequality $x^1 + y^1 \le x^2 + y^2$ also holds. Let now $\lambda$ be a nonnegative real number. Explain why $\lambda x^1 \le \lambda x^2$ holds. What happens if $\lambda$ is negative?

Example 1.2.1 (The nonnegative real vectors) The sum of two nonnegative numbers is again a nonnegative number. Similarly, we see that the sum of two nonnegative vectors is a nonnegative vector. Moreover, if we multiply a nonnegative vector by a nonnegative number, we get another nonnegative vector. These two properties may be summarized by saying that $\mathbb{R}^n_+$ is closed under addition and multiplication by nonnegative scalars. We shall see that this means that $\mathbb{R}^n_+$ is a convex cone, a special type of convex set.

Exercise 1.2. Think about the question in Exercise 1.1 again, now in light of the properties explained in Example 1.2.1.

Exercise 1.3. Let $a \in \mathbb{R}^n_+$ and assume that $x \le y$. Show that $a^T x \le a^T y$. What happens if we do not require $a$ to be nonnegative here?

1.3 Linear programming

A linear programming problem (LP problem, for short) is an optimization problem where one wants to maximize or minimize some linear function $c^T x$ of the variable vector $x = (x_1, \dots, x_n)$ over a certain set. This set is the solution set of a system of linear equations and inequalities in $x$. More specifically, an LP problem in standard form is
$$\begin{array}{rl}
\text{maximize} & c_1 x_1 + \dots + c_n x_n \\
\text{subject to} & a_{11} x_1 + \dots + a_{1n} x_n \le b_1; \\
& \quad \vdots \\
& a_{m1} x_1 + \dots + a_{mn} x_n \le b_m; \\
& x_1, \dots, x_n \ge 0.
\end{array} \tag{1.1}$$
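For experimentation, an LP in the standard form (1.1) can be solved with scipy's linprog. The data below are made up for illustration; note that linprog minimizes, so the objective is negated, and its default variable bounds are exactly $x \ge 0$.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative data (not from the text):
#   max 3x1 + 2x2  s.t.  x1 + x2 <= 4,  x1 + 3x2 <= 6,  x >= 0.
c = np.array([3.0, 2.0])
A = np.array([[1.0, 1.0],
              [1.0, 3.0]])
b = np.array([4.0, 6.0])

res = linprog(-c, A_ub=A, b_ub=b)   # default bounds encode x >= 0
print(res.x, -res.fun)              # optimal point (4, 0), optimal value 12
```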

With our notion of nonnegativity of vectors, this LP problem may be written compactly in matrix form as follows:
$$\text{maximize } c^T x \quad \text{subject to } Ax \le b;\ x \ge O. \tag{1.2}$$
Here $A = [a_{i,j}]$ is the $m \times n$ matrix with $(i,j)$th element $a_{i,j}$, and $b$ is the column vector with $i$th component $b_i$. We recall that a vector $x$ is called feasible in the LP problem (1.2) if it satisfies $Ax \le b$ and $x \ge O$. Let $P$ be the set of all feasible solutions of (1.2). The properties of this set depend, of course, on the coefficient matrix $A$ and the right-hand side $b$. But is there some interesting property shared by all such sets $P$? Yes, and it is described next.

Example 1.3.1 (Linear programming) Let $P$ be the feasible set of (1.2) and assume that $P$ is nonempty. Choose two distinct feasible points, say $x^1$ and $x^2$. Thus, $x^1, x^2 \in P$ and $x^1 \ne x^2$. What can be said about the vector $z = (1/2)x^1 + (1/2)x^2$? Geometrically, $z$ is the midpoint of the line segment $L$ in $\mathbb{R}^n$ between $x^1$ and $x^2$. But does $z$ lie in $P$? First, we see that $z \ge O$ (recall Example 1.2.1). Moreover,
$$Az = A[(1/2)x^1 + (1/2)x^2] = (1/2)Ax^1 + (1/2)Ax^2 \le (1/2)b + (1/2)b = b,$$
again by our rules for calculating with nonnegative vectors. This shows that $z$ does lie in $P$, so it is also a feasible solution of the LP problem. Now, exactly the same thing happens if we consider another point, say $w$, on the line segment $L$. We know that $w$ may be written as $(1-\lambda)x^1 + \lambda x^2$ (or, if you prefer, $x^1 + \lambda(x^2 - x^1)$) for some scalar $\lambda$ satisfying $0 \le \lambda \le 1$. Thus, $P$ has the property that it contains all points on the line segment between any two points in $P$. This is precisely the property that $P$ is convex. An attempt to illustrate the geometry of linear programming is given in Fig. 1.3 (where the feasible region is the solution set of five linear inequalities).

1.4 Convex sets, cones and polyhedra

We now define our basic notion. A set $C \subseteq \mathbb{R}^n$ is called convex if $(1-\lambda)x^1 + \lambda x^2 \in C$ whenever $x^1, x^2 \in C$ and $0 \le \lambda \le 1$. Geometrically, this means that $C$ contains the line segment between each pair of its points. In the previous example we showed that the set
$$P = \{x \in \mathbb{R}^n : Ax \le b,\ x \ge O\} \tag{1.3}$$
is convex for all $A \in \mathbb{R}^{m,n}$ and $b \in \mathbb{R}^m$. In fact, this set is a very special convex set, called a polyhedron. Polyhedra are the subject of a later chapter.

[Figure 1.3: Linear programming: a feasible set, the level lines $c^T x = \text{const.}$, and the maximum value of $c^T x$]

How can we prove that a set is convex? The direct way is to use the definition, as we did in Example 1.3.1. Later we learn some other useful techniques. How can we verify that a set $S$ is not convex? Well, it suffices to find two points $x^1$ and $x^2$ and $0 \le \lambda \le 1$ with the property that $(1-\lambda)x^1 + \lambda x^2 \notin S$ (you have then found a kind of "hole" in $S$).

Example 1.4.1 (The unit ball) The unit ball in $\mathbb{R}^n$ is the set $B = \{x \in \mathbb{R}^n : \|x\| \le 1\}$, i.e., the set of points with Euclidean distance at most one to the origin. (So $\|x\| = (\sum_j x_j^2)^{1/2}$ is the Euclidean, or $l_2$-, norm of the vector $x \in \mathbb{R}^n$.) We shall show that $B$ is convex. To do this we use the definition of convexity combined with the triangle inequality, which says that $\|u + v\| \le \|u\| + \|v\|$ for $u, v \in \mathbb{R}^n$. So let $x, y \in B$ and $\lambda \in [0, 1]$. We want to show that $(1-\lambda)x + \lambda y \in B$, i.e., that $\|(1-\lambda)x + \lambda y\| \le 1$. We use the triangle inequality (and norm properties) and calculate
$$\|(1-\lambda)x + \lambda y\| \le \|(1-\lambda)x\| + \|\lambda y\| = (1-\lambda)\|x\| + \lambda\|y\| \le (1-\lambda) + \lambda = 1.$$
Therefore $B$ is convex.

Exercise 1.4. Show that every ball $B(a, r) := \{x \in \mathbb{R}^n : \|x - a\| \le r\}$ is convex (where $a \in \mathbb{R}^n$ and $r \ge 0$).

Some examples of convex sets in $\mathbb{R}^2$ are found in Fig. 1.2. By a linear system we mean a finite set of linear equations and/or linear inequalities involving variables $x_1, \dots, x_n$. For example, the set $P$ in (1.3) was defined as the solution set of a linear system.

Consider the linear system $x_1 + x_2 = 3$, $x_1 \ge 0$, $x_2 \ge 0$ in the variables $x_1, x_2$. The solution set is the set of points $(x_1, 3 - x_1)$ where $0 \le x_1 \le 3$. This linear system may be written differently. For instance, an equivalent form is $x_1 + x_2 \le 3$, $-x_1 - x_2 \le -3$, $-x_1 \le 0$, $-x_2 \le 0$. Here we only have $\le$-inequalities, and these two systems clearly have the same solution set. From this small example, it should be clear that any linear system may easily be converted to a system (in the same variables) with only linear inequalities of $\le$-form, i.e., a linear system $Ax \le b$.

Motivated by these considerations, we define a polyhedron in $\mathbb{R}^n$ as a set of the form $\{x \in \mathbb{R}^n : Ax \le b\}$ where $A \in \mathbb{R}^{m,n}$ and $b \in \mathbb{R}^m$ ($m$ is arbitrary, but finite). Thus, a polyhedron is the solution set of a linear system $Ax \le b$; see Fig. 1.4. As we observed, this means that the solution set of any linear system is a polyhedron. Moreover, by repeating the convexity argument of Example 1.3.1 we have the following result.

[Figure 1.4: Linear system and polyhedron: the set $\{x : a_1^T x \le \alpha_1,\ a_2^T x \le \alpha_2,\ a_3^T x \le \alpha_3\}$ with normal vectors $a_1, a_2, a_3$]

Proposition (Polyhedra). The solution set of any linear system in the variable $x \in \mathbb{R}^n$ is a polyhedron. Every polyhedron is a convex set.

Project 1.1 (Different LP forms). Often LP problems are written in forms different from the one in (1.2). For instance, the feasible set may be one of the following:
$$\begin{array}{l}
P_0 = \{x^0 \in \mathbb{R}^{n_0} : A_0 x^0 \le b_0,\ x^0 \ge O\}; \\
P_1 = \{x^1 \in \mathbb{R}^{n_1} : A_1 x^1 = b_1,\ x^1 \ge O\}; \\
P_2 = \{x^2 \in \mathbb{R}^{n_2} : A_2 x^2 \le b_2\}.
\end{array} \tag{1.4}$$
All these sets are polyhedra, as explained above. You are now asked to work out that these three sets are equally general, in the sense that each $P_i$ may be written as a set $P_j$, for all $i$ and $j$. We have already mentioned how one can write $P_0$ and $P_1$ in the form $P_2$ (rewriting each equation as a pair of $\le$-inequalities).

Note that, in this process, we could use the same number of variables (so, for instance, $n_2 = n_1$). However, this is not so when you write $P_2$ in the form $P_1$ (or $P_0$). Actually, we need two techniques for going from $P_2$ to (say) $P_1$. The first technique is to introduce equations instead of inequalities. Recall that $A_2 x^2 \le b_2$ means that the vector $z$ defined by the equation $z = b_2 - A_2 x^2$ is nonnegative. So, by introducing additional variables you may do the job. Explain the details. The second technique is to make sure that all variables are required to be nonnegative. To see this, we observe that a variable $x_j$ with no sign constraint may be replaced by two nonnegative variables $x_j'$ and $x_j''$ by introducing the equation $x_j = x_j' - x_j''$. The reason is simply that any real number may be written as a difference between two nonnegative numbers. Explain the details in the transformation. Note that it is common to say simply that a linear system $Ax \le b$ may be written in the form $Ax = b$, $x \ge O$, although this may require a different $x$, $A$ and $b$. Similar terminology is used for LP problems in different forms.

Exercise 1.5. Explain how you can write the LP problem $\max\{c^T x : Ax \le b\}$ in the form $\max\{c^T x : Ax = b,\ x \ge O\}$.

Example 1.4.2 (Optimal solutions in LP) Consider an LP problem, for instance $\max\{c^T x : x \in P\}$ where $P$ is a polyhedron in $\mathbb{R}^n$. We assume that the problem has a finite optimal value $v^* := \max\{c^T x : x \in P\}$. Recall that the set of optimal solutions is the set $F = \{x \in P : c^T x = v^*\}$. Give an example with two variables and illustrate $P$, $c$ and $F$. Next, show that $F$ is a convex set. In fact, $F$ is a polyhedron. Why? We mention that $F$ is a special subpolyhedron of $P$, contained in the boundary of $P$. Later we shall study such sets $F$ closer; they are so-called faces of $P$. For instance, we shall see that there are only finitely many faces of $P$. Thus, there are only finitely many possible sets of optimal solutions of LP problems with $P$ as the feasible set.

Example 1.4.3 (Probabilities) Let $T = \{t_1, \dots, t_n\}$ be a set of $n$ real numbers. Consider a discrete stochastic variable $X$ with values in $T$, and let the probability of the event $X = t_j$ be $p_j$ for $j = 1, \dots, n$. Probabilities are nonnegative and sum to 1, so the vector of probabilities $p = (p_1, \dots, p_n)$ lies in the set
$$S_n = \{x \in \mathbb{R}^n : x \ge O,\ \sum_{j=1}^n x_j = 1\}.$$
This set is a polyhedron. It is called the standard simplex in $\mathbb{R}^n$, for reasons we explain later.
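A small numerical companion to the simplex $S_n$ (our own sketch, with an illustrative helper name): it checks membership and previews the representation by unit vectors asked about in Exercise 1.6 below, where the coefficients are simply the components of $x$.

```python
import numpy as np

def in_standard_simplex(x, tol=1e-9):
    """Check x >= O and sum(x) = 1, i.e. membership in S_n."""
    x = np.asarray(x, dtype=float)
    return bool(np.all(x >= -tol) and abs(x.sum() - 1.0) <= tol)

x = np.array([0.2, 0.5, 0.3])
print(in_standard_simplex(x))     # True
# Representation by unit vectors (see Exercise 1.6): the coefficients
# are the components of x themselves, since x = sum_j x_j e_j.
print(x @ np.eye(3))              # reproduces x
```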

[Figure 1.5: A convex cone in $\mathbb{R}^3$]

Exercise 1.6. Make a drawing of the standard simplices $S_1$, $S_2$ and $S_3$. Verify that each unit vector $e_j$ lies in $S_n$ ($e_j$ has a one in position $j$; all other components are zero). Each $x \in S_n$ may be written as a linear combination $x = \sum_{j=1}^n \lambda_j e_j$ where each $\lambda_j$ is nonnegative and $\sum_{j=1}^n \lambda_j = 1$. How? Can this be done in several ways?

A set $C \subseteq \mathbb{R}^n$ is called a convex cone if $\lambda_1 x^1 + \lambda_2 x^2 \in C$ whenever $x^1, x^2 \in C$ and $\lambda_1, \lambda_2 \ge 0$. An example is $\mathbb{R}^n_+$, the set of nonnegative vectors in $\mathbb{R}^n$. A convex cone in $\mathbb{R}^3$ is shown in Fig. 1.5. Note that every (nonempty) convex cone contains $O$ (just let $\lambda_1 = \lambda_2 = 0$ in the definition). Moreover, a convex cone is closed under multiplication by a nonnegative scalar: if $x \in C$ and $\lambda \in \mathbb{R}_+$, then $\lambda x \in C$. The reader should verify this property based on the definition.

Exercise 1.7. Show that each convex cone is indeed a convex set.

There are two examples of convex cones that are important for linear programming.

Exercise 1.8. Let $A \in \mathbb{R}^{m,n}$ and consider the set $C = \{x \in \mathbb{R}^n : Ax \le O\}$. Prove that $C$ is a convex cone.

A convex cone of the form $\{x \in \mathbb{R}^n : Ax \le O\}$ where $A \in \mathbb{R}^{m,n}$ is called a polyhedral cone. Let $x^1, \dots, x^t \in \mathbb{R}^n$ and let $C(x^1, \dots, x^t)$ be the set of vectors of the form
$$\sum_{j=1}^t \lambda_j x^j \quad \text{where } \lambda_j \ge 0 \text{ for each } j = 1, \dots, t.$$

Exercise 1.9. Prove that $C(x^1, \dots, x^t)$ is a convex cone.

A convex cone of the form $C(x^1, \dots, x^t)$ is called a finitely generated cone, and we say that it is generated by the vectors $x^1, \dots, x^t$. If $t = 1$, so $C = \{\lambda x^1 : \lambda \ge 0\}$, then $C$ is called a ray.

More generally, the set $R = \{x^0 + \lambda x^1 : \lambda \ge 0\}$ is called a halfline, and we say that $x^1$ is a direction vector for $R$. Thus, a ray is a halfline starting in the origin.

Later we shall see (and prove) the interesting fact that these two classes of cones coincide: a convex cone is polyhedral if and only if it is finitely generated.

Exercise 1.10. Let $S = \{(x, y, z) : z \ge \sqrt{x^2 + y^2}\} \subseteq \mathbb{R}^3$. Sketch the set and verify that it is a convex set. Is $S$ a finitely generated cone?

1.5 Linear algebra and affine sets

Although we assume that the reader is familiar with linear algebra, it is useful to have a quick look at some important linear algebra notions at this point. Here is a small linear algebra project.

Project 1.2 (A linear algebra reminder). Linear algebra is the foundation of convex analysis. We should recall two important notions: linear independence and linear subspace.

Let $x^1, \dots, x^t$ be vectors in $\mathbb{R}^n$. We say that $x^1, \dots, x^t$ are linearly independent if $\sum_{j=1}^t \lambda_j x^j = O$ implies that $\lambda_1 = \dots = \lambda_t = 0$. Thus, the only way to write the zero vector $O$ as a linear combination $\sum_{j=1}^t \lambda_j x^j$ of the given vectors $x^1, \dots, x^t$ is the trivial way, with all coefficients $\lambda_j$ being zero. This condition may be expressed in matrix notation when we introduce a matrix $X$ with $j$th column being the vector $x^j$. Thus, $X \in \mathbb{R}^{n,t}$, and linear independence of $x^1, \dots, x^t$ means that $X\lambda = O$ implies $\lambda = O$ (we then say that $X$ has full column rank). As a small example, consider the vectors $x^1 = (1, 0, 1)$ and $x^2 = (1, 2, 3)$ in $\mathbb{R}^3$. These are linearly independent, as $\lambda_1 x^1 + \lambda_2 x^2 = O$ implies that $\lambda_2 = 0$ (consider the second component) and therefore also $\lambda_1 = 0$. Note that any set of vectors containing the zero vector is linearly dependent (i.e., not linearly independent).

Show the following: if $x = \sum_{j=1}^t \lambda_j x^j = \sum_{j=1}^t \mu_j x^j$, where $x^1, \dots, x^t$ are linearly independent, then $\lambda_j = \mu_j$ for each $j = 1, \dots, t$. Thus, the vector $x$ can be written as a linear combination of the vectors $x^1, \dots, x^t$ in only one way. Give an example illustrating that such a uniqueness result does not hold for linearly dependent vectors.

We proceed to linear subspaces. Recall that a set $L \subseteq \mathbb{R}^n$ is called a (linear) subspace if it is closed under addition and multiplication by scalars. This means that $\lambda_1 x^1 + \lambda_2 x^2 \in L$ whenever $x^1, x^2 \in L$ and $\lambda_1, \lambda_2 \in \mathbb{R}$. A very important fact is that every linear subspace $L$ may be represented in two different ways. First, consider a maximal set of linearly independent vectors, say $x^1, \dots, x^t$, in $L$. This means that if we add a vector in $L$ to this set, we obtain a linearly dependent set of vectors.

Then $x^1, \dots, x^t$ spans $L$, in the sense that $L$ is precisely the set of linear combinations of the vectors $x^1, \dots, x^t$. Moreover, as explained above, due to the linear independence each vector in $L$ may be written uniquely as a linear combination of $x^1, \dots, x^t$. This set of $t$ vectors is called a basis of $L$. A crucial fact is that $L$ may have many bases, but they all have the same number of elements. This number $t$ is called the dimension of $L$. The second representation of a linear subspace $L$ is as the kernel of some matrix. We recall that the kernel (or nullspace) of a matrix $A \in \mathbb{R}^{m,n}$ is the set of vectors $x$ satisfying $Ax = O$. This set is denoted by $\mathrm{Ker}(A)$. Check that the kernel of any $m \times n$ matrix is a linear subspace. Next, try to show the opposite: that every linear subspace is the kernel of some matrix. Confer with some linear algebra textbook (hint: orthogonal complements).

Why bother with linear subspaces in a text on convexity? One reason is that every linear subspace is a (very special) polyhedron; this is seen from the kernel representation $L = \{x \in \mathbb{R}^n : Ax = O\}$. It follows that every linear subspace is a convex set. Prove, using the definitions, that every linear subspace is a convex set. Our final point here is that the two different representations of linear subspaces may be generalized to hold for large classes of convex sets. This will be important to us later, but we need to do some more work before these results can be discussed.

Linear algebra, of course, is much more than a study of linear subspaces. For instance, one of the central problems is to solve linear systems of equations. Thus, given a matrix $A \in \mathbb{R}^{m,n}$ and a vector $b \in \mathbb{R}^m$, we want to solve the linear equation $Ax = b$. Often we have that $m = n$ and that $A$ is nonsingular (invertible). This means that the columns of $A$ are linearly independent and therefore $Ax = b$ has a unique solution. However, there are many interesting situations where one is concerned with rectangular linear systems, i.e., where the number of equations may not be equal to the number of variables. Examples here are optimization, regression analysis and approximation problems.

Now, any linear system of equations $Ax = b$ is also a linear system as we have defined it. Thus, the solution set of $Ax = b$ must be a polyhedron. But this polyhedron is very special, as we shall see next.

Project 1.3 (Affine sets). We say that a set $C \subseteq \mathbb{R}^n$ is affine provided that it contains the line through any pair of its points. This means that whenever $x^1, x^2 \in C$ and $\lambda \in \mathbb{R}$, the vector $(1-\lambda)x^1 + \lambda x^2$ also lies in $C$. Note that this vector equals $x^1 + \lambda(x^2 - x^1)$ and that, when $x^1$ and $x^2$ are distinct, the vector $x^2 - x^1$ is a direction vector for the line through $x^1$ and $x^2$.
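Before continuing the project, here is a quick numerical spot check (our own, with toy data) of the claim explored below: the solution set of $Ax = b$ contains the whole line through any two of its points, also for $\lambda$ outside $[0, 1]$.

```python
import numpy as np

# Toy data: two solutions of Ax = b, and points along the line
# through them; every such point solves the system as well.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.array([2.0, 1.0])
x1 = np.array([1.0, 1.0, 0.0])        # satisfies A @ x1 == b
x2 = np.array([2.0, 0.0, 1.0])        # satisfies A @ x2 == b

for lam in (-3.0, -0.5, 0.0, 0.5, 1.0, 4.0):
    z = (1 - lam) * x1 + lam * x2
    assert np.allclose(A @ z, b)      # the b-terms combine with weight one
print("the line through x1 and x2 lies in {x : Ax = b}")
```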

For instance, a line in $\mathbb{R}^n$ is an affine set. Another example is the set $C = \{x^0 + \lambda_1 r^1 + \lambda_2 r^2 : \lambda_1, \lambda_2 \in \mathbb{R}\}$, which is a two-dimensional plane going through $x^0$ and spanned by the (linearly independent) nonzero vectors $r^1$ and $r^2$. See Fig. 1.6 for an example. Show that every affine set is a convex set!

[Figure 1.6: Affine set, showing an affine set and the parallel linear subspace]

Here is the connection between affine sets and linear systems of equations. Let $C$ be the solution set of $Ax = b$ where $A \in \mathbb{R}^{m,n}$ and $b \in \mathbb{R}^m$. Show that $C$ is an affine set! In particular, the solution set $H$ of a single equation $a^T x = \alpha$, where $a \ne O$, is an affine set. Such a set $H$ is called a hyperplane, and the vector $a$ is called a normal vector of the hyperplane. Give some examples of hyperplanes in $\mathbb{R}^2$ and in $\mathbb{R}^3$! We say that two hyperplanes $H$ and $H'$ are parallel if they have parallel normal vectors. Show that two hyperplanes in $\mathbb{R}^n$ that are not parallel must intersect! What kind of set is the intersection?

Are there any affine sets that are not the solution set of some system of linear equations? The answer is no, so we have

Proposition (Affine sets). Let $C$ be a nonempty subset of $\mathbb{R}^n$. Then $C$ is an affine set if and only if there is a matrix $A \in \mathbb{R}^{m,n}$ and a vector $b \in \mathbb{R}^m$, for some $m$, such that $C = \{x \in \mathbb{R}^n : Ax = b\}$. Moreover, $C$ may be written as $C = L + x^0 = \{x + x^0 : x \in L\}$ for some linear subspace $L$ of $\mathbb{R}^n$. The subspace $L$ is unique.

We leave the proof as an exercise.

Project 1.4 (Preservation of convexity). Convexity is preserved under several operations, and the next result describes a few of these. We here use some set notation. When $A, B \subseteq \mathbb{R}^n$, their sum is the set $A + B = \{x + y : x \in A,\ y \in B\}$. Similarly, when $\lambda \in \mathbb{R}$ we let $\lambda A := \{\lambda x : x \in A\}$. In each situation below you should give an example and try to prove the statement; a small numerical check of the third statement is sketched after the list.

1. Let $C_1, C_2$ be convex sets in $\mathbb{R}^n$ and let $\lambda_1, \lambda_2$ be real numbers. Then $\lambda_1 C_1 + \lambda_2 C_2$ is convex.

2. The intersection of any (even infinite) family of convex sets is a convex set (you may have shown this already!).

3. Let $T : \mathbb{R}^n \to \mathbb{R}^m$ be an affine transformation, i.e., a function of the form $T(x) = Ax + b$ for some $A \in \mathbb{R}^{m,n}$ and $b \in \mathbb{R}^m$. Then $T$ maps convex sets to convex sets; i.e., if $C$ is a convex set in $\mathbb{R}^n$, then $T(C) = \{T(x) : x \in C\}$ is a convex set in $\mathbb{R}^m$.
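Here is the promised check of statement 3 (a sketch with random data, not a proof): an affine map $T$ sends the convex combination $(1-\lambda)x + \lambda y$ to the same convex combination of $T(x)$ and $T(y)$, because the $b$-terms carry total weight one.

```python
import numpy as np

# Random toy data for an affine map T(x) = Ax + b.
rng = np.random.default_rng(2)
A, b = rng.normal(size=(2, 3)), rng.normal(size=2)
T = lambda x: A @ x + b

x, y, lam = rng.normal(size=3), rng.normal(size=3), 0.3
print(np.allclose(T((1 - lam) * x + lam * y),
                  (1 - lam) * T(x) + lam * T(y)))   # True
```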

1.6 Exercises

Exercise 1.11. Consider the linear system $0 \le x_i \le 1$ for $i = 1, \dots, n$, and let $P$ denote the solution set. Explain how to solve a linear programming problem $\max\{c^T x : x \in P\}$. What if the linear system were $a_i \le x_i \le b_i$ for $i = 1, \dots, n$? Here we assume $a_i \le b_i$ for each $i$. (A sketch of a solution in code is given at the end of this list of exercises.)

Exercise 1.12. Is the union of two convex sets again convex?

Exercise 1.13. Determine the sum $A + B$ in each of the following cases: (i) $A = \{(x, y) : x^2 + y^2 \le 1\}$, $B = \{(3, 4)\}$; (ii) $A = \{(x, y) : x^2 + y^2 \le 1\}$, $B = [0, 1] \times \{0\}$; (iii) $A = \{(x, y) : x + 2y = 5\}$, $B = \{(x, y) : x = y,\ 0 \le x \le 1\}$; (iv) $A = [0, 1] \times [1, 2]$, $B = [0, 2] \times [0, 2]$.

Exercise 1.14. (i) Prove that, for every $\lambda \in \mathbb{R}$ and $A, B \subseteq \mathbb{R}^n$, it holds that $\lambda(A + B) = \lambda A + \lambda B$. (ii) Is it true that $(\lambda + \mu)A = \lambda A + \mu A$ for every $\lambda, \mu \in \mathbb{R}$ and $A \subseteq \mathbb{R}^n$? If not, find a counterexample. (iii) Show that, if $\lambda, \mu \ge 0$ and $A \subseteq \mathbb{R}^n$ is convex, then $(\lambda + \mu)A = \lambda A + \mu A$.

Exercise 1.15. Show that if $C_1, \dots, C_t \subseteq \mathbb{R}^n$ are all convex sets, then $C_1 \cap \dots \cap C_t$ is convex. Do the same when all the sets are affine (or linear subspaces, or convex cones). In fact, a similar result holds for the intersection of any family of convex sets. Explain this.

Exercise 1.16. Consider a family (possibly infinite) of linear inequalities $a_i^T x \le b_i$, $i \in I$, and let $C$ be its solution set, i.e., $C$ is the set of points satisfying all the inequalities. Prove that $C$ is a convex set.
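Returning to Exercise 1.11, here is the promised sketch (our own illustration, not the text's solution). Since the constraints $a_i \le x_i \le b_i$ are independent across coordinates, the problem separates, and each coordinate can be set greedily.

```python
import numpy as np

def box_lp_max(c, a, b):
    """max c^T x subject to a_i <= x_i <= b_i. The problem separates by
    coordinate: choose x_i = b_i when c_i > 0 and x_i = a_i otherwise
    (either endpoint works when c_i = 0)."""
    c, a, b = (np.asarray(v, dtype=float) for v in (c, a, b))
    x = np.where(c > 0, b, a)
    return x, float(c @ x)

print(box_lp_max([2.0, -1.0, 0.5], [0, 0, 0], [1, 1, 1]))
# (array([1., 0., 1.]), 2.5)
```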

Exercise 1.17. Consider the unit disc $S = \{(x_1, x_2) \in \mathbb{R}^2 : x_1^2 + x_2^2 \le 1\}$ in $\mathbb{R}^2$. Find a family of linear inequalities, as in the previous problem, with solution set $S$.

Exercise 1.18. Is the unit ball $B = \{x \in \mathbb{R}^n : \|x\|_2 \le 1\}$ a polyhedron?

Exercise 1.19. Consider the unit ball $B = \{x \in \mathbb{R}^n : \|x\|_\infty \le 1\}$, where $\|x\|_\infty = \max_j |x_j|$ is the max norm of $x$. Show that $B$ is a polyhedron. Illustrate when $n = 2$.

Exercise 1.20. Consider the unit ball $B_1 = \{x \in \mathbb{R}^n : \|x\|_1 \le 1\}$, where $\|x\|_1 = \sum_{j=1}^n |x_j|$ is the absolute norm of $x$. Show that $B_1$ is a polyhedron. Illustrate when $n = 2$.

Exercise 1.21. Prove the proposition on affine sets in Section 1.5.

Exercise 1.22. Let $C$ be a nonempty affine set in $\mathbb{R}^n$. Define $L = C - C$. Show that $L$ is a subspace and that $C = L + x^0$ for some vector $x^0$.

SUMMARY OF NEW CONCEPTS AND RESULTS:

- convex set
- convex cone (finitely generated, polyhedral)
- polyhedron
- linear system
- linear programming
- linear algebra: linear independence, linear subspace, representations
- affine set
- the norms $\|\cdot\|_1$, $\|\cdot\|_2$ and $\|\cdot\|_\infty$


Chapter 2

Convex hulls and Carathéodory's theorem

We have now introduced our main objects: convex sets and special convex sets (convex cones, polyhedra). In this chapter we investigate these objects further, and a central notion is that of convex combinations of points. We shall define the dimension of a set and study the topological properties of convex sets.

2.1 Convex and nonnegative combinations

In convex analysis one is interested in certain special linear combinations of vectors that represent "mixtures" of points. Consider vectors $x^1, \dots, x^t \in \mathbb{R}^n$ and nonnegative numbers (coefficients) $\lambda_j \ge 0$ for $j = 1, \dots, t$ such that $\sum_{j=1}^t \lambda_j = 1$. Then the vector $x = \sum_{j=1}^t \lambda_j x^j$ is called a convex combination of $x^1, \dots, x^t$; see Fig. 2.1. Thus, a convex combination is a special linear combination where the coefficients are nonnegative and sum to one. A special case is when $t = 2$ and we have a convex combination of two points: $\lambda_1 x^1 + \lambda_2 x^2 = (1 - \lambda_2)x^1 + \lambda_2 x^2$. Note that we may reformulate our definition of a convex set by saying that it is closed under convex combinations of each pair of its points.

We give a remark on the terminology here. If $S \subseteq \mathbb{R}^n$ is any set, we say that $x$ is a convex combination of points in $S$ if $x$ may be written as a convex combination of a finite number of points in $S$. Thus, there are no infinite series or convergence questions we need to worry about.

Example 2.1.1 (Convex combinations) Consider the following four vectors in $\mathbb{R}^2$: $(0, 0)$, $(1, 0)$, $(0, 1)$ and $(1, 1)$. The point $(1/2, 1/2)$ is a convex combination of $(1, 0)$ and $(0, 1)$, as we have $(1/2, 1/2) = (1/2)(1, 0) + (1/2)(0, 1)$. We also see that $(1/2, 1/2)$ is a convex combination of the vectors $(0, 0)$ and $(1, 1)$. Thus, a point may have different representations as convex combinations.
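A two-line numerical illustration of the example above, using a small helper of our own (convex_combination, which also validates the coefficients):

```python
import numpy as np

def convex_combination(points, weights):
    """Form sum_j lambda_j x^j after checking lambda >= 0 and sum = 1."""
    P = np.asarray(points, dtype=float)
    lam = np.asarray(weights, dtype=float)
    assert np.all(lam >= 0) and np.isclose(lam.sum(), 1.0)
    return lam @ P

# Two different representations of (1/2, 1/2), as in the example:
print(convex_combination([[1, 0], [0, 1]], [0.5, 0.5]))   # [0.5 0.5]
print(convex_combination([[0, 0], [1, 1]], [0.5, 0.5]))   # [0.5 0.5]
```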

[Figure 2.1: Convex combinations]

Similarly, we call a vector $\sum_{j=1}^t \lambda_j x^j$ a nonnegative combination of the vectors $x^1, \dots, x^t$ when $\lambda_1, \dots, \lambda_t \ge 0$. It is clear that every convex combination is also a nonnegative combination, and that every nonnegative combination is a linear combination.

Exercise 2.1. Illustrate some combinations (linear, convex, nonnegative) of two vectors in $\mathbb{R}^2$.

The following result says that a convex set is closed under the operation of taking convex combinations. This is similar to a known fact for linear subspaces: they are closed under linear combinations.

Proposition (Convex sets). A set $C \subseteq \mathbb{R}^n$ is convex if and only if it contains all convex combinations of its points. A set $C \subseteq \mathbb{R}^n$ is a convex cone if and only if it contains all nonnegative combinations of its points.

Proof. If $C$ contains all convex combinations of its points, then this also holds for combinations of two points, and then $C$ must be convex. Conversely, assume that $C$ is convex. We prove that $C$ contains every convex combination of $t$ of its elements using induction on $t$. When $t = 2$ this is clearly true, as $C$ is convex. Assume next that $C$ contains any convex combination of $t - 1$ elements (where $t \ge 3$). Let $x^1, \dots, x^t \in C$ and $\lambda_j > 0$ for $j = 1, \dots, t$, where $\sum_{j=1}^t \lambda_j = 1$. Thus, $0 < \lambda_1 < 1$ (if $\lambda_1 = 1$ we would get $t = 1$).

We have that
$$x = \lambda_1 x^1 + (1 - \lambda_1) \sum_{j=2}^t \big(\lambda_j/(1 - \lambda_1)\big) x^j.$$
Note that $\sum_{j=2}^t \lambda_j/(1 - \lambda_1) = 1$, and each coefficient is nonnegative. Therefore the vector $y = \sum_{j=2}^t (\lambda_j/(1 - \lambda_1)) x^j$ is a convex combination of $t - 1$ elements in $C$, so $y \in C$ by the induction hypothesis. Moreover, $x$ is a convex combination of $x^1$ and $y$, both in $C$, and therefore $x \in C$ as desired. The result concerning conical combinations is proved similarly.

Exercise 2.2. Choose your favorite three points $x^1, x^2, x^3$ in $\mathbb{R}^2$, but make sure that they do not all lie on the same line. Thus, the three points form the corners of a triangle $C$. Describe those points that are convex combinations of two of the three points. What about the interior of the triangle $C$, i.e., those points that lie in $C$ but not on the boundary (the three sides): can these points be written as convex combinations of $x^1$, $x^2$ and $x^3$? If so, how?

2.2 The convex hull

Consider two distinct points $x^1, x^2 \in \mathbb{R}^n$ (let $n = 2$ if you like). There are many convex sets that contain both these points. But is there a smallest convex set that contains $x^1$ and $x^2$? It is not difficult to answer this positively. The line segment $L$ between $x^1$ and $x^2$ has these properties: it is convex, it contains both points, and any other convex set containing $x^1$ and $x^2$ must also contain $L$. Note here that $L$ is precisely the set of convex combinations of the two points $x^1$ and $x^2$. Similarly, if $x^1, x^2, x^3$ are three points in $\mathbb{R}^2$ (or $\mathbb{R}^n$) not all on the same line, then the triangle $T$ that they define must be the smallest convex set containing $x^1$, $x^2$ and $x^3$. And again we note that $T$ is also the set of convex combinations of $x^1, x^2, x^3$ (confer Exercise 2.2).

More generally, let $S \subseteq \mathbb{R}^n$ be any set. Define the convex hull of $S$, denoted by $\mathrm{conv}(S)$, as the set of all convex combinations of points in $S$ (see Fig. 2.2). The convex hull of two points $x^1$ and $x^2$, i.e., the line segment between the two points, is often denoted by $[x^1, x^2]$. An important fact is that $\mathrm{conv}(S)$ is a convex set, whatever the set $S$ might be. Thus, taking the convex hull becomes a way of producing new convex sets.

Exercise 2.3. Show that $\mathrm{conv}(S)$ is convex for all $S \subseteq \mathbb{R}^n$. (Hint: look at two convex combinations $\sum_j \lambda_j x^j$ and $\sum_j \mu_j y^j$, and note that both these points may be written as convex combinations of the same set of vectors.)

Exercise 2.4. Give an example of two distinct sets $S$ and $T$ having the same convex hull. It makes sense to look for a smallest possible subset $S_0$ of a set $S$ such that $S = \mathrm{conv}(S_0)$. We study this question later.

Exercise 2.5. Prove that if $S \subseteq T$, then $\mathrm{conv}(S) \subseteq \mathrm{conv}(T)$.
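For a finite set $S$, the convex hull can be computed; scipy's ConvexHull (a wrapper around the Qhull library) returns its corner points. A hedged sketch with points of our own choosing:

```python
import numpy as np
from scipy.spatial import ConvexHull

# conv(S) for a finite S in the plane: Qhull returns the corner
# points, and interior points of the hull drop out.
S = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0],
              [0.5, 0.5], [0.2, 0.3]])
hull = ConvexHull(S)
print(S[hull.vertices])   # the four corners of the unit square
```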

[Figure 2.2: Convex hull: a set $S$ and $\mathrm{conv}(S)$]

The following proposition tells us that the convex hull of a set $S$ is the smallest convex set containing $S$. Recall that the intersection of an arbitrary family of sets consists of the points that lie in all of these sets.

Proposition (Convex hull). Let $S \subseteq \mathbb{R}^n$. Then $\mathrm{conv}(S)$ is equal to the intersection of all convex sets containing $S$. Thus, $\mathrm{conv}(S)$ is the smallest convex set containing $S$.

Proof. From Exercise 2.3 we have that $\mathrm{conv}(S)$ is convex. Moreover, $S \subseteq \mathrm{conv}(S)$; just look at a convex combination of one point! Therefore $W \subseteq \mathrm{conv}(S)$, where $W$ is defined as the intersection of all convex sets containing $S$. Now, consider a convex set $C$ containing $S$. Then $C$ must contain all convex combinations of points in $S$; this follows from the proposition in Section 2.1. But then $\mathrm{conv}(S) \subseteq C$, and we conclude that $W$ (the intersection of such sets $C$) must contain $\mathrm{conv}(S)$. This concludes the proof.

What we have just done concerning convex combinations may be repeated for nonnegative combinations. Thus, when $S \subseteq \mathbb{R}^n$ we define the conical hull of $S$, denoted by $\mathrm{cone}(S)$, as the set of all nonnegative combinations of points in $S$. This set is always a convex cone. Moreover, we have:

Proposition (Conical hull). Let $S \subseteq \mathbb{R}^n$. Then $\mathrm{cone}(S)$ is equal to the intersection of all convex cones containing $S$. Thus, $\mathrm{cone}(S)$ is the smallest convex cone containing $S$.

The proof is left as an exercise.

Exercise 2.6. If $S$ is convex, then $\mathrm{conv}(S) = S$. Show this!

Exercise 2.7. Let $S = \{x \in \mathbb{R}^2 : \|x\|_2 = 1\}$; this is the unit circle in $\mathbb{R}^2$. Determine $\mathrm{conv}(S)$ and $\mathrm{cone}(S)$.
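Deciding whether a given vector lies in the conical hull of finitely many vectors is itself a linear feasibility problem, which foreshadows the example that follows. A sketch (the helper name in_cone is ours; the status codes follow scipy's linprog):

```python
import numpy as np
from scipy.optimize import linprog

def in_cone(generators, b):
    """Is b a nonnegative combination of the generators, i.e. does b
    lie in cone({a^1, ..., a^t})? Feasibility of G lam = b, lam >= 0
    is a small LP with zero objective."""
    G = np.column_stack(generators)
    res = linprog(np.zeros(G.shape[1]), A_eq=G, b_eq=b)
    return res.status == 0        # 0 = solved/feasible, 2 = infeasible

g1, g2 = np.array([1.0, 0.0]), np.array([1.0, 1.0])
print(in_cone([g1, g2], np.array([3.0, 1.0])))    # True: 2*g1 + 1*g2
print(in_cone([g1, g2], np.array([-1.0, 0.0])))   # False
```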

Example 2.2.1 (LP and convex cones) Consider a linear programming problem $\max\{c^T x : x \in P\}$ where $P = \{x \in \mathbb{R}^n : Ax = b,\ x \ge O\}$. We have that $Ax = \sum_{j=1}^n x_j a^j$, where $a^j$ is the $j$th column of $A$. Thus, $P$ is nonempty if and only if $b \in \mathrm{cone}(\{a^1, \dots, a^n\})$. Moreover, a point $x$ is feasible in the LP problem (i.e., $x \in P$) if and only if its components $x_j$ are the coefficients of $a^j$ when $b$ is represented as a nonnegative combination of $a^1, \dots, a^n$.

We have seen that by taking the convex hull we produce a convex set, whatever set we might start with. If we start with a finite set, a very interesting class of convex sets arises. A set $P \subseteq \mathbb{R}^n$ is called a polytope if it is the convex hull of a finite set of points in $\mathbb{R}^n$. Polytopes have been studied a lot during the history of mathematics. Some polytopes are illustrated in Fig. 2.2 a); the convex set in Fig. 2.2 b) is not a polytope. Today polytope theory is still a fascinating subject with a lot of activity. One of the reasons is its relation to linear programming, because most LP problems have a feasible set which is a polytope. In fact, we shall later prove an important result in polytope theory saying that a set is a polytope if and only if it is a bounded polyhedron. Thus, in LP problems with a bounded feasible set, this set is really a polytope.

Example 2.2.2 (LP and polytopes) Consider a polytope $P = \mathrm{conv}(\{x^1, \dots, x^t\})$. We want to solve the optimization problem
$$(\ast) \quad \max\{c^T x : x \in P\}$$
where $c \in \mathbb{R}^n$. As mentioned above, this problem is an LP problem, but we do not worry too much about this now. The interesting thing is the combination of a linear objective function and the fact that the feasible set is the convex hull of finitely many points. To see this, consider an arbitrary feasible point $x \in P$. Then $x$ may be written as a convex combination of the points $x^1, \dots, x^t$, say $x = \sum_{j=1}^t \lambda_j x^j$ for some $\lambda_j \ge 0$, $j = 1, \dots, t$, where $\sum_j \lambda_j = 1$. Define now $v = \max_j c^T x^j$. We then calculate
$$c^T x = c^T \sum_j \lambda_j x^j = \sum_{j=1}^t \lambda_j c^T x^j \le \sum_{j=1}^t \lambda_j v = v \sum_{j=1}^t \lambda_j = v.$$
Thus, $v$ is an upper bound for the optimal value in the optimization problem $(\ast)$. We also see that this bound is attained whenever $\lambda_j$ is positive only for those indices $j$ satisfying $c^T x^j = v$. Let $J$ be the set of such indices.

We conclude that the set of optimal solutions of the problem $(\ast)$ is the set $\mathrm{conv}(\{x^j : j \in J\})$, which is another polytope (contained in $P$). The procedure just described may be useful computationally if the number $t$ of points defining $P$ is not too large. In some cases $t$ is too large, and then we may still be able to solve the problem $(\ast)$ by different methods, typically linear programming related methods.

2.3 Affine independence and dimension

We know what we mean by the dimension $\dim(L)$ of a linear subspace $L$ of $\mathbb{R}^n$: $\dim(L)$ is the cardinality of a basis of $L$, or equivalently, the maximal number of linearly independent vectors lying in $L$. This provides a starting point for defining the dimension of more general sets, in fact any set, in $\mathbb{R}^n$. The forthcoming definition of dimension may be loosely explained as follows. Let $S$ be a set and pick a point $x$ in $S$. We want the (as yet undefined) dimension of $S$ to tell how many (linearly) independent directions we can move in, starting from $x$, and still hit some point in $S$. For instance, consider the case when $S$ is convex (which is of main interest here). Say that we have a point $x \in S$ and can find other points $x^1, \dots, x^t$ that also lie in $S$. Thus, by convexity, we can move from $x$ in each of the directions $x^j - x$ for $j = 1, \dots, t$ and still be in $S$ (if we do not go too far). If the vectors $x^1 - x, \dots, x^t - x$ are linearly independent, and $t$ is largest possible, we say that $S$ has dimension $t$.

We now make these ideas more precise. First, we introduce the notion of affine independence. A set of vectors $x^1, \dots, x^t \in \mathbb{R}^n$ is called affinely independent if $\sum_{j=1}^t \lambda_j x^j = O$ and $\sum_{j=1}^t \lambda_j = 0$ imply that $\lambda_1 = \dots = \lambda_t = 0$. This definition resembles the definition of linear independence, except for the extra condition that the sum of the $\lambda$'s is zero. Note that if a set of vectors is linearly independent, then it is also affinely independent. In fact, there is a useful relationship between these two notions, as the next proposition tells us.

Proposition (Affine independence). The vectors $x^1, \dots, x^t \in \mathbb{R}^n$ are affinely independent if and only if the $t - 1$ vectors $x^2 - x^1, \dots, x^t - x^1$ are linearly independent.

Proof. Let $x^1, \dots, x^t \in \mathbb{R}^n$ be affinely independent and assume that $\lambda_2, \dots, \lambda_t \in \mathbb{R}$ and $\sum_{j=2}^t \lambda_j (x^j - x^1) = O$. Then $(-\sum_{j=2}^t \lambda_j) x^1 + \sum_{j=2}^t \lambda_j x^j = O$. Note here that the sum of all the coefficients is zero, so by affine independence of $x^1, \dots, x^t$ we get that $\lambda_2 = \dots = \lambda_t = 0$. This proves that $x^2 - x^1, \dots, x^t - x^1$ are linearly independent. Conversely, let $x^2 - x^1, \dots, x^t - x^1$ be linearly independent and assume $\sum_{j=1}^t \lambda_j x^j = O$ and $\sum_{j=1}^t \lambda_j = 0$. Then $\lambda_1 = -\sum_{j=2}^t \lambda_j$ and therefore $O = (-\sum_{j=2}^t \lambda_j) x^1 + \sum_{j=2}^t \lambda_j x^j = \sum_{j=2}^t \lambda_j (x^j - x^1)$.

But, as $x^2 - x^1, \dots, x^t - x^1$ are linearly independent, we must have $\lambda_2 = \dots = \lambda_t = 0$ and therefore also $\lambda_1 = -\sum_{j=2}^t \lambda_j = 0$.

[Figure 2.3: Affine independence]

In the example shown in Fig. 2.3 the vectors $x^1, x^2, x^3$ are affinely independent, and the vectors $x^2 - x^1$ and $x^3 - x^1$ are linearly independent.

Exercise 2.8. Does affine independence imply linear independence? Does linear independence imply affine independence? Prove or disprove!

A useful property of affine independence is that it is preserved whenever all our vectors are translated by a fixed vector, as discussed in the next exercise.

Exercise 2.9. Let $x^1, \dots, x^t \in \mathbb{R}^n$ be affinely independent and let $w \in \mathbb{R}^n$. Show that $x^1 + w, \dots, x^t + w$ are also affinely independent.

We can now, finally, define the dimension of a set. The dimension of a set $S \subseteq \mathbb{R}^n$, denoted by $\dim(S)$, is the maximal number of affinely independent points in $S$ minus 1. So, for example in $\mathbb{R}^3$, the dimension of a point and a line is 0 and 1 respectively, and the dimension of the plane $x_3 = 0$ is 2. See Fig. 2.4 for some examples.

Exercise 2.10. Let $L$ be a linear subspace of dimension (in the usual linear algebra sense) $t$. Check that this coincides with our new definition of dimension above. (Hint: add $O$ to a suitable set of vectors.)

Consider a convex set $C$ of dimension $d$. Then there are $d + 1$ (and no more) affinely independent points in $C$. Let $S = \{x^1, \dots, x^{d+1}\}$ denote a set of such points. Then the set of all convex combinations of these vectors, i.e., $\mathrm{conv}(S)$, is a polytope contained in $C$, and $\dim(S) = \dim(C)$. Moreover, let $A$ be the set of all vectors of the form $\sum_{j=1}^{d+1} \lambda_j x^j$ where $\sum_{j=1}^{d+1} \lambda_j = 1$ (no sign restriction on the $\lambda$'s). Then $A$ is an affine set containing $C$, and it is the smallest affine set with this property; $A$ is called the affine hull of $C$.

Exercise 2.11. Prove the last statements in the previous paragraph.
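The proposition above gives an immediate numerical test for affine independence; here is a sketch of ours using the matrix rank. Counting the largest affinely independent subset (minus 1) would then give the dimension just defined.

```python
import numpy as np

def affinely_independent(points):
    """Test affine independence via the proposition above:
    x^1, ..., x^t are affinely independent iff the differences
    x^2 - x^1, ..., x^t - x^1 are linearly independent."""
    P = np.asarray(points, dtype=float)
    D = P[1:] - P[0]                     # rows: x^j - x^1
    return np.linalg.matrix_rank(D) == len(P) - 1

print(affinely_independent([[0, 0], [1, 0], [0, 1]]))   # True: a triangle
print(affinely_independent([[0, 0], [1, 1], [2, 2]]))   # False: collinear
```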


More information

Unbounded Convex Semialgebraic Sets as Spectrahedral Shadows

Unbounded Convex Semialgebraic Sets as Spectrahedral Shadows Unbounded Convex Semialgebraic Sets as Spectrahedral Shadows Shaowei Lin 9 Dec 2010 Abstract Recently, Helton and Nie [3] showed that a compact convex semialgebraic set S is a spectrahedral shadow if the

More information

Normal Fans of Polyhedral Convex Sets

Normal Fans of Polyhedral Convex Sets Set-Valued Analysis manuscript No. (will be inserted by the editor) Normal Fans of Polyhedral Convex Sets Structures and Connections Shu Lu Stephen M. Robinson Received: date / Accepted: date Dedicated

More information

Linear Algebra I. Ronald van Luijk, 2015

Linear Algebra I. Ronald van Luijk, 2015 Linear Algebra I Ronald van Luijk, 2015 With many parts from Linear Algebra I by Michael Stoll, 2007 Contents Dependencies among sections 3 Chapter 1. Euclidean space: lines and hyperplanes 5 1.1. Definition

More information

This chapter reviews some basic geometric facts that we will need during the course.

This chapter reviews some basic geometric facts that we will need during the course. Chapter 1 Some Basic Geometry This chapter reviews some basic geometric facts that we will need during the course. 1.1 Affine Geometry We will assume that you are familiar with the basic notions of linear

More information

Fundamentals of Linear Algebra. Marcel B. Finan Arkansas Tech University c All Rights Reserved

Fundamentals of Linear Algebra. Marcel B. Finan Arkansas Tech University c All Rights Reserved Fundamentals of Linear Algebra Marcel B. Finan Arkansas Tech University c All Rights Reserved 2 PREFACE Linear algebra has evolved as a branch of mathematics with wide range of applications to the natural

More information

GEORGIA INSTITUTE OF TECHNOLOGY H. MILTON STEWART SCHOOL OF INDUSTRIAL AND SYSTEMS ENGINEERING LECTURE NOTES OPTIMIZATION III

GEORGIA INSTITUTE OF TECHNOLOGY H. MILTON STEWART SCHOOL OF INDUSTRIAL AND SYSTEMS ENGINEERING LECTURE NOTES OPTIMIZATION III GEORGIA INSTITUTE OF TECHNOLOGY H. MILTON STEWART SCHOOL OF INDUSTRIAL AND SYSTEMS ENGINEERING LECTURE NOTES OPTIMIZATION III CONVEX ANALYSIS NONLINEAR PROGRAMMING THEORY NONLINEAR PROGRAMMING ALGORITHMS

More information

Extreme Abridgment of Boyd and Vandenberghe s Convex Optimization

Extreme Abridgment of Boyd and Vandenberghe s Convex Optimization Extreme Abridgment of Boyd and Vandenberghe s Convex Optimization Compiled by David Rosenberg Abstract Boyd and Vandenberghe s Convex Optimization book is very well-written and a pleasure to read. The

More information

Farkas Lemma. Rudi Pendavingh. Optimization in R n, lecture 2. Eindhoven Technical University. Rudi Pendavingh (TUE) Farkas Lemma ORN2 1 / 15

Farkas Lemma. Rudi Pendavingh. Optimization in R n, lecture 2. Eindhoven Technical University. Rudi Pendavingh (TUE) Farkas Lemma ORN2 1 / 15 Farkas Lemma Rudi Pendavingh Eindhoven Technical University Optimization in R n, lecture 2 Rudi Pendavingh (TUE) Farkas Lemma ORN2 1 / 15 Today s Lecture Theorem (Farkas Lemma, 1894) Let A be an m n matrix,

More information

Chapter 2: Preliminaries and elements of convex analysis

Chapter 2: Preliminaries and elements of convex analysis Chapter 2: Preliminaries and elements of convex analysis Edoardo Amaldi DEIB Politecnico di Milano edoardo.amaldi@polimi.it Website: http://home.deib.polimi.it/amaldi/opt-14-15.shtml Academic year 2014-15

More information

MAT2342 : Introduction to Applied Linear Algebra Mike Newman, fall Projections. introduction

MAT2342 : Introduction to Applied Linear Algebra Mike Newman, fall Projections. introduction MAT4 : Introduction to Applied Linear Algebra Mike Newman fall 7 9. Projections introduction One reason to consider projections is to understand approximate solutions to linear systems. A common example

More information

Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88

Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88 Math Camp 2010 Lecture 4: Linear Algebra Xiao Yu Wang MIT Aug 2010 Xiao Yu Wang (MIT) Math Camp 2010 08/10 1 / 88 Linear Algebra Game Plan Vector Spaces Linear Transformations and Matrices Determinant

More information

This pre-publication material is for review purposes only. Any typographical or technical errors will be corrected prior to publication.

This pre-publication material is for review purposes only. Any typographical or technical errors will be corrected prior to publication. This pre-publication material is for review purposes only. Any typographical or technical errors will be corrected prior to publication. Copyright Pearson Canada Inc. All rights reserved. Copyright Pearson

More information

Linear Programming Inverse Projection Theory Chapter 3

Linear Programming Inverse Projection Theory Chapter 3 1 Linear Programming Inverse Projection Theory Chapter 3 University of Chicago Booth School of Business Kipp Martin September 26, 2017 2 Where We Are Headed We want to solve problems with special structure!

More information

1 Maximal Lattice-free Convex Sets

1 Maximal Lattice-free Convex Sets 47-831: Advanced Integer Programming Lecturer: Amitabh Basu Lecture 3 Date: 03/23/2010 In this lecture, we explore the connections between lattices of R n and convex sets in R n. The structures will prove

More information

CO 250 Final Exam Guide

CO 250 Final Exam Guide Spring 2017 CO 250 Final Exam Guide TABLE OF CONTENTS richardwu.ca CO 250 Final Exam Guide Introduction to Optimization Kanstantsin Pashkovich Spring 2017 University of Waterloo Last Revision: March 4,

More information

Chapter 2. Vectors and Vector Spaces

Chapter 2. Vectors and Vector Spaces 2.1. Operations on Vectors 1 Chapter 2. Vectors and Vector Spaces Section 2.1. Operations on Vectors Note. In this section, we define several arithmetic operations on vectors (especially, vector addition

More information

A NICE PROOF OF FARKAS LEMMA

A NICE PROOF OF FARKAS LEMMA A NICE PROOF OF FARKAS LEMMA DANIEL VICTOR TAUSK Abstract. The goal of this short note is to present a nice proof of Farkas Lemma which states that if C is the convex cone spanned by a finite set and if

More information

Algebraic Methods in Combinatorics

Algebraic Methods in Combinatorics Algebraic Methods in Combinatorics Po-Shen Loh 27 June 2008 1 Warm-up 1. (A result of Bourbaki on finite geometries, from Răzvan) Let X be a finite set, and let F be a family of distinct proper subsets

More information

Optimality Conditions for Nonsmooth Convex Optimization

Optimality Conditions for Nonsmooth Convex Optimization Optimality Conditions for Nonsmooth Convex Optimization Sangkyun Lee Oct 22, 2014 Let us consider a convex function f : R n R, where R is the extended real field, R := R {, + }, which is proper (f never

More information

Linear Programming and its Extensions Prof. Prabha Shrama Department of Mathematics and Statistics Indian Institute of Technology, Kanpur

Linear Programming and its Extensions Prof. Prabha Shrama Department of Mathematics and Statistics Indian Institute of Technology, Kanpur Linear Programming and its Extensions Prof. Prabha Shrama Department of Mathematics and Statistics Indian Institute of Technology, Kanpur Lecture No. # 03 Moving from one basic feasible solution to another,

More information

Constrained Optimization and Lagrangian Duality

Constrained Optimization and Lagrangian Duality CIS 520: Machine Learning Oct 02, 2017 Constrained Optimization and Lagrangian Duality Lecturer: Shivani Agarwal Disclaimer: These notes are designed to be a supplement to the lecture. They may or may

More information

Locally convex spaces, the hyperplane separation theorem, and the Krein-Milman theorem

Locally convex spaces, the hyperplane separation theorem, and the Krein-Milman theorem 56 Chapter 7 Locally convex spaces, the hyperplane separation theorem, and the Krein-Milman theorem Recall that C(X) is not a normed linear space when X is not compact. On the other hand we could use semi

More information

The value of a problem is not so much coming up with the answer as in the ideas and attempted ideas it forces on the would be solver I.N.

The value of a problem is not so much coming up with the answer as in the ideas and attempted ideas it forces on the would be solver I.N. Math 410 Homework Problems In the following pages you will find all of the homework problems for the semester. Homework should be written out neatly and stapled and turned in at the beginning of class

More information

Jørgen Tind, Department of Statistics and Operations Research, University of Copenhagen, Universitetsparken 5, 2100 Copenhagen O, Denmark.

Jørgen Tind, Department of Statistics and Operations Research, University of Copenhagen, Universitetsparken 5, 2100 Copenhagen O, Denmark. DUALITY THEORY Jørgen Tind, Department of Statistics and Operations Research, University of Copenhagen, Universitetsparken 5, 2100 Copenhagen O, Denmark. Keywords: Duality, Saddle point, Complementary

More information

Optimization and Optimal Control in Banach Spaces

Optimization and Optimal Control in Banach Spaces Optimization and Optimal Control in Banach Spaces Bernhard Schmitzer October 19, 2017 1 Convex non-smooth optimization with proximal operators Remark 1.1 (Motivation). Convex optimization: easier to solve,

More information

Discrete Geometry. Problem 1. Austin Mohr. April 26, 2012

Discrete Geometry. Problem 1. Austin Mohr. April 26, 2012 Discrete Geometry Austin Mohr April 26, 2012 Problem 1 Theorem 1 (Linear Programming Duality). Suppose x, y, b, c R n and A R n n, Ax b, x 0, A T y c, and y 0. If x maximizes c T x and y minimizes b T

More information

Inequality Constraints

Inequality Constraints Chapter 2 Inequality Constraints 2.1 Optimality Conditions Early in multivariate calculus we learn the significance of differentiability in finding minimizers. In this section we begin our study of the

More information

Math 5593 Linear Programming Week 1

Math 5593 Linear Programming Week 1 University of Colorado Denver, Fall 2013, Prof. Engau 1 Problem-Solving in Operations Research 2 Brief History of Linear Programming 3 Review of Basic Linear Algebra Linear Programming - The Story About

More information

Chapter 2. Error Correcting Codes. 2.1 Basic Notions

Chapter 2. Error Correcting Codes. 2.1 Basic Notions Chapter 2 Error Correcting Codes The identification number schemes we discussed in the previous chapter give us the ability to determine if an error has been made in recording or transmitting information.

More information

Analysis-3 lecture schemes

Analysis-3 lecture schemes Analysis-3 lecture schemes (with Homeworks) 1 Csörgő István November, 2015 1 A jegyzet az ELTE Informatikai Kar 2015. évi Jegyzetpályázatának támogatásával készült Contents 1. Lesson 1 4 1.1. The Space

More information

Linear algebra. S. Richard

Linear algebra. S. Richard Linear algebra S. Richard Fall Semester 2014 and Spring Semester 2015 2 Contents Introduction 5 0.1 Motivation.................................. 5 1 Geometric setting 7 1.1 The Euclidean space R n..........................

More information

A Parametric Simplex Algorithm for Linear Vector Optimization Problems

A Parametric Simplex Algorithm for Linear Vector Optimization Problems A Parametric Simplex Algorithm for Linear Vector Optimization Problems Birgit Rudloff Firdevs Ulus Robert Vanderbei July 9, 2015 Abstract In this paper, a parametric simplex algorithm for solving linear

More information

EE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 17

EE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 17 EE/ACM 150 - Applications of Convex Optimization in Signal Processing and Communications Lecture 17 Andre Tkacenko Signal Processing Research Group Jet Propulsion Laboratory May 29, 2012 Andre Tkacenko

More information

Chapter 2 Convex Analysis

Chapter 2 Convex Analysis Chapter 2 Convex Analysis The theory of nonsmooth analysis is based on convex analysis. Thus, we start this chapter by giving basic concepts and results of convexity (for further readings see also [202,

More information

Auerbach bases and minimal volume sufficient enlargements

Auerbach bases and minimal volume sufficient enlargements Auerbach bases and minimal volume sufficient enlargements M. I. Ostrovskii January, 2009 Abstract. Let B Y denote the unit ball of a normed linear space Y. A symmetric, bounded, closed, convex set A in

More information

Geometric problems. Chapter Projection on a set. The distance of a point x 0 R n to a closed set C R n, in the norm, is defined as

Geometric problems. Chapter Projection on a set. The distance of a point x 0 R n to a closed set C R n, in the norm, is defined as Chapter 8 Geometric problems 8.1 Projection on a set The distance of a point x 0 R n to a closed set C R n, in the norm, is defined as dist(x 0,C) = inf{ x 0 x x C}. The infimum here is always achieved.

More information

LINEAR ALGEBRA: THEORY. Version: August 12,

LINEAR ALGEBRA: THEORY. Version: August 12, LINEAR ALGEBRA: THEORY. Version: August 12, 2000 13 2 Basic concepts We will assume that the following concepts are known: Vector, column vector, row vector, transpose. Recall that x is a column vector,

More information

Set, functions and Euclidean space. Seungjin Han

Set, functions and Euclidean space. Seungjin Han Set, functions and Euclidean space Seungjin Han September, 2018 1 Some Basics LOGIC A is necessary for B : If B holds, then A holds. B A A B is the contraposition of B A. A is sufficient for B: If A holds,

More information

LINEAR ALGEBRA KNOWLEDGE SURVEY

LINEAR ALGEBRA KNOWLEDGE SURVEY LINEAR ALGEBRA KNOWLEDGE SURVEY Instructions: This is a Knowledge Survey. For this assignment, I am only interested in your level of confidence about your ability to do the tasks on the following pages.

More information

Lecture 1. 1 Conic programming. MA 796S: Convex Optimization and Interior Point Methods October 8, Consider the conic program. min.

Lecture 1. 1 Conic programming. MA 796S: Convex Optimization and Interior Point Methods October 8, Consider the conic program. min. MA 796S: Convex Optimization and Interior Point Methods October 8, 2007 Lecture 1 Lecturer: Kartik Sivaramakrishnan Scribe: Kartik Sivaramakrishnan 1 Conic programming Consider the conic program min s.t.

More information

Math Advanced Calculus II

Math Advanced Calculus II Math 452 - Advanced Calculus II Manifolds and Lagrange Multipliers In this section, we will investigate the structure of critical points of differentiable functions. In practice, one often is trying to

More information

Appendix B Convex analysis

Appendix B Convex analysis This version: 28/02/2014 Appendix B Convex analysis In this appendix we review a few basic notions of convexity and related notions that will be important for us at various times. B.1 The Hausdorff distance

More information

Introduction to Mathematical Programming IE406. Lecture 3. Dr. Ted Ralphs

Introduction to Mathematical Programming IE406. Lecture 3. Dr. Ted Ralphs Introduction to Mathematical Programming IE406 Lecture 3 Dr. Ted Ralphs IE406 Lecture 3 1 Reading for This Lecture Bertsimas 2.1-2.2 IE406 Lecture 3 2 From Last Time Recall the Two Crude Petroleum example.

More information

3 Development of the Simplex Method Constructing Basic Solution Optimality Conditions The Simplex Method...

3 Development of the Simplex Method Constructing Basic Solution Optimality Conditions The Simplex Method... Contents Introduction to Linear Programming Problem. 2. General Linear Programming problems.............. 2.2 Formulation of LP problems.................... 8.3 Compact form and Standard form of a general

More information

Math 24 Spring 2012 Questions (mostly) from the Textbook

Math 24 Spring 2012 Questions (mostly) from the Textbook Math 24 Spring 2012 Questions (mostly) from the Textbook 1. TRUE OR FALSE? (a) The zero vector space has no basis. (F) (b) Every vector space that is generated by a finite set has a basis. (c) Every vector

More information

Convex Geometry. Carsten Schütt

Convex Geometry. Carsten Schütt Convex Geometry Carsten Schütt November 25, 2006 2 Contents 0.1 Convex sets... 4 0.2 Separation.... 9 0.3 Extreme points..... 15 0.4 Blaschke selection principle... 18 0.5 Polytopes and polyhedra.... 23

More information

Lecture Notes 1: Vector spaces

Lecture Notes 1: Vector spaces Optimization-based data analysis Fall 2017 Lecture Notes 1: Vector spaces In this chapter we review certain basic concepts of linear algebra, highlighting their application to signal processing. 1 Vector

More information

Convexity, Duality, and Lagrange Multipliers

Convexity, Duality, and Lagrange Multipliers LECTURE NOTES Convexity, Duality, and Lagrange Multipliers Dimitri P. Bertsekas with assistance from Angelia Geary-Nedic and Asuman Koksal Massachusetts Institute of Technology Spring 2001 These notes

More information

Integer Programming, Part 1

Integer Programming, Part 1 Integer Programming, Part 1 Rudi Pendavingh Technische Universiteit Eindhoven May 18, 2016 Rudi Pendavingh (TU/e) Integer Programming, Part 1 May 18, 2016 1 / 37 Linear Inequalities and Polyhedra Farkas

More information

Vector Spaces. Chapter 1

Vector Spaces. Chapter 1 Chapter 1 Vector Spaces Linear algebra is the study of linear maps on finite-dimensional vector spaces. Eventually we will learn what all these terms mean. In this chapter we will define vector spaces

More information

Cambridge University Press The Mathematics of Signal Processing Steven B. Damelin and Willard Miller Excerpt More information

Cambridge University Press The Mathematics of Signal Processing Steven B. Damelin and Willard Miller Excerpt More information Introduction Consider a linear system y = Φx where Φ can be taken as an m n matrix acting on Euclidean space or more generally, a linear operator on a Hilbert space. We call the vector x a signal or input,

More information

Ellipsoidal Mixed-Integer Representability

Ellipsoidal Mixed-Integer Representability Ellipsoidal Mixed-Integer Representability Alberto Del Pia Jeff Poskin September 15, 2017 Abstract Representability results for mixed-integer linear systems play a fundamental role in optimization since

More information

Math 4A Notes. Written by Victoria Kala Last updated June 11, 2017

Math 4A Notes. Written by Victoria Kala Last updated June 11, 2017 Math 4A Notes Written by Victoria Kala vtkala@math.ucsb.edu Last updated June 11, 2017 Systems of Linear Equations A linear equation is an equation that can be written in the form a 1 x 1 + a 2 x 2 +...

More information

Optimization WS 13/14:, by Y. Goldstein/K. Reinert, 9. Dezember 2013, 16: Linear programming. Optimization Problems

Optimization WS 13/14:, by Y. Goldstein/K. Reinert, 9. Dezember 2013, 16: Linear programming. Optimization Problems Optimization WS 13/14:, by Y. Goldstein/K. Reinert, 9. Dezember 2013, 16:38 2001 Linear programming Optimization Problems General optimization problem max{z(x) f j (x) 0,x D} or min{z(x) f j (x) 0,x D}

More information

Linear programming: theory, algorithms and applications

Linear programming: theory, algorithms and applications Linear programming: theory, algorithms and applications illes@math.bme.hu Department of Differential Equations Budapest 2014/2015 Fall Vector spaces A nonempty set L equipped with addition and multiplication

More information

Measures. Chapter Some prerequisites. 1.2 Introduction

Measures. Chapter Some prerequisites. 1.2 Introduction Lecture notes Course Analysis for PhD students Uppsala University, Spring 2018 Rostyslav Kozhan Chapter 1 Measures 1.1 Some prerequisites I will follow closely the textbook Real analysis: Modern Techniques

More information

FUNCTIONAL ANALYSIS LECTURE NOTES: COMPACT SETS AND FINITE-DIMENSIONAL SPACES. 1. Compact Sets

FUNCTIONAL ANALYSIS LECTURE NOTES: COMPACT SETS AND FINITE-DIMENSIONAL SPACES. 1. Compact Sets FUNCTIONAL ANALYSIS LECTURE NOTES: COMPACT SETS AND FINITE-DIMENSIONAL SPACES CHRISTOPHER HEIL 1. Compact Sets Definition 1.1 (Compact and Totally Bounded Sets). Let X be a metric space, and let E X be

More information

Research Division. Computer and Automation Institute, Hungarian Academy of Sciences. H-1518 Budapest, P.O.Box 63. Ujvári, M. WP August, 2007

Research Division. Computer and Automation Institute, Hungarian Academy of Sciences. H-1518 Budapest, P.O.Box 63. Ujvári, M. WP August, 2007 Computer and Automation Institute, Hungarian Academy of Sciences Research Division H-1518 Budapest, P.O.Box 63. ON THE PROJECTION ONTO A FINITELY GENERATED CONE Ujvári, M. WP 2007-5 August, 2007 Laboratory

More information

CS261: A Second Course in Algorithms Lecture #9: Linear Programming Duality (Part 2)

CS261: A Second Course in Algorithms Lecture #9: Linear Programming Duality (Part 2) CS261: A Second Course in Algorithms Lecture #9: Linear Programming Duality (Part 2) Tim Roughgarden February 2, 2016 1 Recap This is our third lecture on linear programming, and the second on linear programming

More information

Convex Sets with Applications to Economics

Convex Sets with Applications to Economics Convex Sets with Applications to Economics Debasis Mishra March 10, 2010 1 Convex Sets A set C R n is called convex if for all x, y C, we have λx+(1 λ)y C for all λ [0, 1]. The definition says that for

More information

Chapter 6 - Orthogonality

Chapter 6 - Orthogonality Chapter 6 - Orthogonality Maggie Myers Robert A. van de Geijn The University of Texas at Austin Orthogonality Fall 2009 http://z.cs.utexas.edu/wiki/pla.wiki/ 1 Orthogonal Vectors and Subspaces http://z.cs.utexas.edu/wiki/pla.wiki/

More information

Algebraic Varieties. Notes by Mateusz Micha lek for the lecture on April 17, 2018, in the IMPRS Ringvorlesung Introduction to Nonlinear Algebra

Algebraic Varieties. Notes by Mateusz Micha lek for the lecture on April 17, 2018, in the IMPRS Ringvorlesung Introduction to Nonlinear Algebra Algebraic Varieties Notes by Mateusz Micha lek for the lecture on April 17, 2018, in the IMPRS Ringvorlesung Introduction to Nonlinear Algebra Algebraic varieties represent solutions of a system of polynomial

More information

Construction of a general measure structure

Construction of a general measure structure Chapter 4 Construction of a general measure structure We turn to the development of general measure theory. The ingredients are a set describing the universe of points, a class of measurable subsets along

More information

1 Topology Definition of a topology Basis (Base) of a topology The subspace topology & the product topology on X Y 3

1 Topology Definition of a topology Basis (Base) of a topology The subspace topology & the product topology on X Y 3 Index Page 1 Topology 2 1.1 Definition of a topology 2 1.2 Basis (Base) of a topology 2 1.3 The subspace topology & the product topology on X Y 3 1.4 Basic topology concepts: limit points, closed sets,

More information

Convex Optimization and an Introduction to Congestion Control. Lecture Notes. Fabian Wirth

Convex Optimization and an Introduction to Congestion Control. Lecture Notes. Fabian Wirth Convex Optimization and an Introduction to Congestion Control Lecture Notes Fabian Wirth August 29, 2012 ii Contents 1 Convex Sets and Convex Functions 3 1.1 Convex Sets....................................

More information

Chapter 2 Linear Transformations

Chapter 2 Linear Transformations Chapter 2 Linear Transformations Linear Transformations Loosely speaking, a linear transformation is a function from one vector space to another that preserves the vector space operations. Let us be more

More information

B. Appendix B. Topological vector spaces

B. Appendix B. Topological vector spaces B.1 B. Appendix B. Topological vector spaces B.1. Fréchet spaces. In this appendix we go through the definition of Fréchet spaces and their inductive limits, such as they are used for definitions of function

More information

Exercises: Brunn, Minkowski and convex pie

Exercises: Brunn, Minkowski and convex pie Lecture 1 Exercises: Brunn, Minkowski and convex pie Consider the following problem: 1.1 Playing a convex pie Consider the following game with two players - you and me. I am cooking a pie, which should

More information

Some notes on Coxeter groups

Some notes on Coxeter groups Some notes on Coxeter groups Brooks Roberts November 28, 2017 CONTENTS 1 Contents 1 Sources 2 2 Reflections 3 3 The orthogonal group 7 4 Finite subgroups in two dimensions 9 5 Finite subgroups in three

More information

Lecture 1 Introduction

Lecture 1 Introduction L. Vandenberghe EE236A (Fall 2013-14) Lecture 1 Introduction course overview linear optimization examples history approximate syllabus basic definitions linear optimization in vector and matrix notation

More information