Lecture 1 Introduction

L. Vandenberghe, EE236A (Fall 2013-14)

Lecture 1: Introduction

course overview
- linear optimization
- examples
- history
- approximate syllabus

basic definitions
- linear optimization in vector and matrix notation
- halfspaces and polyhedra
- geometrical interpretation

Linear optimization

    minimize    sum_{j=1}^n c_j x_j
    subject to  sum_{j=1}^n a_{ij} x_j <= b_i,   i = 1, ..., m
                sum_{j=1}^n d_{ij} x_j  = f_i,   i = 1, ..., p

- optimization variables: x_1, ..., x_n (real scalars)
- problem data (parameters): the coefficients c_j, a_{ij}, b_i, d_{ij}, f_i
- sum_j c_j x_j is the cost function or objective function
- sum_j a_{ij} x_j <= b_i and sum_j d_{ij} x_j = f_i are the inequality and equality constraints
- a problem of this form is called a linear optimization problem or linear program (LP)
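As a concrete illustration (not part of the slides), a problem in exactly this form can be solved with SciPy's `linprog`; the small instance below is made up for the example:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical instance of the general form:
#   minimize c^T x  subject to  A x <= b,  D x = f
c = np.array([1.0, 2.0])
A = np.array([[-1.0, 0.0], [0.0, -1.0]])   # encodes x1 >= 0, x2 >= 0 as -x <= 0
b = np.array([0.0, 0.0])
D = np.array([[1.0, 1.0]])                  # equality constraint x1 + x2 = 1
f = np.array([1.0])

# bounds=(None, None) so all constraints come from A, b, D, f
res = linprog(c, A_ub=A, b_ub=b, A_eq=D, b_eq=f, bounds=(None, None))
print(res.x)    # optimal point: puts all weight on the cheaper variable
print(res.fun)  # optimal value
```

Here the minimum of x1 + 2 x2 over the segment x1 + x2 = 1, x >= 0 is attained at (1, 0) with value 1.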

Importance

low complexity
- problems with several thousand variables and constraints are routinely solved
- much larger problems (millions of variables) can be solved if the problem data are sparse
- software is widely available
- theoretical worst-case complexity is polynomial

wide applicability
- originally developed for applications in economics and management
- today used in all areas of engineering, data analysis, finance, ...
- a key tool in combinatorial optimization

extensive theory
- no simple formula for the solution, but an extensive and useful (duality) theory

Example: open-loop control problem

single-input/single-output system (input u(t), output y(t) at time t):

    y(t) = h_0 u(t) + h_1 u(t-1) + h_2 u(t-2) + h_3 u(t-3) + ...

output tracking problem: minimize the maximum deviation from the desired output y_des(t),

    minimize    max_{t=0,...,N} |y(t) - y_des(t)|

subject to input amplitude and slew rate constraints:

    |u(t)| <= U,    |u(t+1) - u(t)| <= S

- variables: u(0), ..., u(M) (with u(t) = 0 for t < 0 and t > M)
- solution: the problem can be formulated as an LP, hence easily solved (more later)
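A sketch of the LP reformulation, with a made-up impulse response `h`, horizon, and desired output (all assumptions, chosen only to make the code runnable). The max of absolute deviations is handled with one extra epigraph variable `s`; boundary slew terms involving u(t) = 0 outside the horizon are omitted for brevity:

```python
import numpy as np
from scipy.optimize import linprog

h = np.array([0.3, 0.5, 0.2])   # assumed FIR impulse response h_0, h_1, h_2
N, M = 10, 10                   # output horizon t = 0..N, input horizon t = 0..M
U, S = 1.1, 0.25                # amplitude and slew rate limits
y_des = np.ones(N + 1)          # track a unit step (assumption)

# y(t) = sum_k h_k u(t-k) is linear in u, so y = H u for a banded matrix H
H = np.zeros((N + 1, M + 1))
for t in range(N + 1):
    for k, hk in enumerate(h):
        if 0 <= t - k <= M:
            H[t, t - k] = hk

# Epigraph trick: minimize s subject to -s <= y(t) - y_des(t) <= s for all t.
# Decision vector z = (u(0), ..., u(M), s); the objective selects s.
n = M + 2
c = np.zeros(n); c[-1] = 1.0
ones = np.ones((N + 1, 1))
A_rows = [np.hstack([H, -ones]),    #  H u - s <= y_des
          np.hstack([-H, -ones])]   # -H u - s <= -y_des
b_rows = [y_des, -y_des]
# slew rate constraints |u(t+1) - u(t)| <= S
Dmat = np.zeros((M, n))
for t in range(M):
    Dmat[t, t], Dmat[t, t + 1] = -1.0, 1.0
A_rows += [Dmat, -Dmat]
b_rows += [S * np.ones(M), S * np.ones(M)]
A_ub, b_ub = np.vstack(A_rows), np.hstack(b_rows)

bounds = [(-U, U)] * (M + 1) + [(0, None)]   # |u(t)| <= U, s >= 0
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.fun)  # optimal worst-case tracking error
```

With u = 0 the error is 1, so the optimum is at most 1; since y(0) = h_0 u(0) is small, the error at t = 0 keeps it well above 0.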

example

[figure: step response s(t) = h_t + ... + h_0 and desired output y_des(t), plotted for t = 0, ..., 200]

amplitude and slew rate constraints on u:

    |u(t)| <= 1.1,    |u(t) - u(t-1)| <= 0.25

optimal solution (computed via linear optimization)

[figure: optimal input u(t), its slew rate u(t) - u(t-1), and the output compared with the desired output, for t = 0, ..., 200]

Example: assignment problem

match N people to N tasks
- each person is assigned to one task; each task is assigned to one person
- the cost of assigning person i to task j is a_{ij}

combinatorial formulation

    minimize    sum_{i,j=1}^N a_{ij} x_{ij}
    subject to  sum_{i=1}^N x_{ij} = 1,   j = 1, ..., N
                sum_{j=1}^N x_{ij} = 1,   i = 1, ..., N
                x_{ij} in {0, 1},   i, j = 1, ..., N

- variable x_{ij} = 1 if person i is assigned to task j; x_{ij} = 0 otherwise
- N! possible assignments, i.e., too many to enumerate

linear optimization formulation

    minimize    sum_{i,j=1}^N a_{ij} x_{ij}
    subject to  sum_{i=1}^N x_{ij} = 1,   j = 1, ..., N
                sum_{j=1}^N x_{ij} = 1,   i = 1, ..., N
                0 <= x_{ij} <= 1,   i, j = 1, ..., N

- we have relaxed the constraints x_{ij} in {0, 1} to 0 <= x_{ij} <= 1
- it can be shown that at the optimum x_{ij} in {0, 1} (see later)
- hence, we can solve (this particular) combinatorial problem efficiently (via linear optimization or specialized methods)
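A quick way to see the integrality phenomenon is to solve the relaxation numerically; the sketch below (random costs, N = 4, all made up) builds the 2N equality constraints over the flattened variable x_{ij} and checks that the LP returns a 0/1 solution:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N = 4
a = rng.random((N, N))          # random assignment costs a_ij (assumption)

# Flatten x_ij row-major: index of x_ij is i*N + j.
c = a.ravel()
A_eq = np.zeros((2 * N, N * N))
for i in range(N):
    A_eq[i, i * N:(i + 1) * N] = 1.0   # sum_j x_ij = 1 (one task per person)
for j in range(N):
    A_eq[N + j, j::N] = 1.0            # sum_i x_ij = 1 (one person per task)
b_eq = np.ones(2 * N)

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
x = res.x.reshape(N, N)
print(np.round(x, 3))  # a permutation matrix: every entry is 0 or 1
```

The constraint matrix here is totally unimodular, so every vertex of the feasible polyhedron is integral, and a simplex-type solver returns a vertex.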

Brief history

- 1940s (Dantzig, Kantorovich, Koopmans, von Neumann, ...): foundations, motivated by economics and logistics problems
- 1947 (Dantzig): simplex algorithm
- 1950s-60s: applications in other disciplines
- 1979 (Khachiyan): ellipsoid algorithm; more efficient (polynomial-time) than simplex in the worst case, but much slower in practice
- 1984 (Karmarkar): projective (interior-point) algorithm; polynomial-time worst-case complexity, and efficient in practice
- since 1984: variations of interior-point methods (improved complexity or efficiency in practice), software for large-scale problems

Tentative syllabus

- linear and piecewise-linear optimization
- polyhedral geometry
- duality
- applications
- algorithms: simplex algorithm, interior-point algorithms, decomposition
- applications in network and combinatorial optimization
- extensions: linear-fractional programming, introduction to integer linear programming

Vectors

a vector of length n (or n-vector) is written as a column of its components x_1, ..., x_n; we also use the notation

    x = (x_1, x_2, ..., x_n)

- x_i is the ith component or element (real unless specified otherwise)
- the set of real n-vectors is denoted R^n

special vectors (with n determined from context)
- x = 0 (zero vector): x_i = 0, i = 1, ..., n
- x = 1 (vector of all ones): x_i = 1, i = 1, ..., n
- x = e_i (ith basis or unit vector): x_i = 1, x_k = 0 for k != i

Matrices

a matrix of size m x n:

    A = [ A_11  A_12  ...  A_1n ]
        [ A_21  A_22  ...  A_2n ]
        [  ...   ...        ... ]
        [ A_m1  A_m2  ...  A_mn ]

- A_{ij} (or a_{ij}) is the i,j element (or entry, coefficient)
- the set of real m x n matrices is denoted R^{m x n}
- vectors can be viewed as matrices with one column

special matrices (with size determined from context)
- X = 0 (zero matrix): X_{ij} = 0 for i = 1, ..., m, j = 1, ..., n
- X = I (identity matrix): m = n, with X_{ii} = 1 and X_{ij} = 0 for i != j

Operations

- matrix transpose A^T
- scalar multiplication alpha A
- addition A + B and subtraction A - B of matrices of the same size
- product y = Ax of a matrix with a vector of compatible length
- product C = AB of matrices of compatible size
- inner product of n-vectors: x^T y = x_1 y_1 + ... + x_n y_n
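All of these operations map directly onto NumPy (a small illustrative check, not part of the slides):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])
x = np.array([1.0, -1.0])
y = np.array([2.0, 5.0])

print(A.T)       # matrix transpose A^T
print(2 * A)     # scalar multiplication
print(A + B)     # addition of matrices of the same size
print(A @ x)     # matrix-vector product: (1-2, 3-4) = (-1, -1)
print(A @ B)     # matrix-matrix product
print(x @ y)     # inner product x^T y = 1*2 + (-1)*5 = -3.0
```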

LP in inner-product notation

general form

    minimize    sum_{j=1}^n c_j x_j
    subject to  sum_{j=1}^n a_{ij} x_j <= b_i,   i = 1, ..., m
                sum_{j=1}^n d_{ij} x_j  = f_i,   i = 1, ..., p

inner-product notation (c, a_i, d_i are n-vectors)

    minimize    c^T x
    subject to  a_i^T x <= b_i,   i = 1, ..., m
                d_i^T x  = f_i,   i = 1, ..., p

with c = (c_1, ..., c_n), a_i = (a_{i1}, ..., a_{in}), d_i = (d_{i1}, ..., d_{in})

LP in matrix notation

    minimize    c^T x
    subject to  Ax <= b
                Dx  = f

- A is the m x n matrix with elements a_{ij} and rows a_i^T
- D is the p x n matrix with elements d_{ij} and rows d_i^T
- the inequality Ax <= b is a component-wise vector inequality

Terminology

    minimize    c^T x
    subject to  Ax <= b
                Dx  = f

- x is feasible if it satisfies the constraints Ax <= b and Dx = f
- the feasible set is the set of all feasible points
- x* is optimal if it is feasible and c^T x* <= c^T x for all feasible x
- the optimal value of the LP is p* = c^T x*
- unbounded problem: c^T x is unbounded below on the feasible set (p* = -inf)
- infeasible problem: the feasible set is empty (p* = +inf)
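Solvers report these three outcomes explicitly. A small sketch with SciPy's `linprog`, whose documented status codes include 2 for infeasible and 3 for unbounded (the instances below are made up):

```python
from scipy.optimize import linprog

# infeasible: x <= -1 and x >= 0 cannot both hold, so p* = +inf
res_inf = linprog(c=[1.0], A_ub=[[1.0]], b_ub=[-1.0], bounds=[(0, None)])
print(res_inf.status)   # 2 = problem is infeasible

# unbounded: minimize x with x <= 0 and no lower bound, so p* = -inf
res_unb = linprog(c=[1.0], A_ub=[[1.0]], b_ub=[0.0], bounds=[(None, None)])
print(res_unb.status)   # 3 = problem is unbounded
```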

Vector norms

Euclidean norm

    ||x|| = sqrt(x_1^2 + x_2^2 + ... + x_n^2) = sqrt(x^T x)

l_1-norm and l_inf-norm

    ||x||_1   = |x_1| + |x_2| + ... + |x_n|
    ||x||_inf = max{ |x_1|, |x_2|, ..., |x_n| }

properties (satisfied by any norm f(x))
- f(alpha x) = |alpha| f(x) (homogeneity)
- f(x + y) <= f(x) + f(y) (triangle inequality)
- f(x) >= 0 (nonnegativity); f(x) = 0 if and only if x = 0 (definiteness)
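The three norms are available through `numpy.linalg.norm` with its `ord` parameter (an illustrative check, not part of the slides):

```python
import numpy as np

x = np.array([3.0, -4.0])
print(np.linalg.norm(x))          # Euclidean norm: sqrt(9 + 16) = 5.0
print(np.linalg.norm(x, 1))       # l_1 norm: |3| + |-4| = 7.0
print(np.linalg.norm(x, np.inf))  # l_inf norm: max(|3|, |-4|) = 4.0
```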

Cauchy-Schwarz inequality

    -||x|| ||y|| <= x^T y <= ||x|| ||y||

- holds for all vectors x, y of the same size
- x^T y = ||x|| ||y|| iff x and y are aligned (nonnegative multiples of each other)
- x^T y = -||x|| ||y|| iff x and y are opposed (nonpositive multiples of each other)
- implies many useful inequalities as special cases; for example, taking y = 1,

    -sqrt(n) ||x|| <= sum_{i=1}^n x_i <= sqrt(n) ||x||
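A quick numerical sanity check of the inequality and its y = 1 special case on random vectors (illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)
for _ in range(1000):
    x, y = rng.normal(size=5), rng.normal(size=5)
    # |x^T y| <= ||x|| ||y|| (small slack for floating-point rounding)
    assert abs(x @ y) <= np.linalg.norm(x) * np.linalg.norm(y) + 1e-12
    # special case y = 1: |sum_i x_i| <= sqrt(n) ||x||
    assert abs(x.sum()) <= np.sqrt(5) * np.linalg.norm(x) + 1e-12
print("Cauchy-Schwarz holds on all samples")
```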

Angle between vectors

the angle theta = angle(x, y) between nonzero vectors x and y is defined as

    theta = arccos( x^T y / (||x|| ||y||) )    (i.e., x^T y = ||x|| ||y|| cos theta)

we normalize theta so that 0 <= theta <= pi

relation between the sign of the inner product and the angle
- x^T y > 0  <=>  theta < pi/2  (vectors make an acute angle)
- x^T y = 0  <=>  theta = pi/2  (orthogonal vectors)
- x^T y < 0  <=>  theta > pi/2  (vectors make an obtuse angle)
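The definition translates directly into code; the `clip` guards against arguments slightly outside [-1, 1] due to rounding (an illustrative sketch):

```python
import numpy as np

def angle(x, y):
    """Angle between nonzero vectors x and y, normalized to [0, pi]."""
    c = (x @ y) / (np.linalg.norm(x) * np.linalg.norm(y))
    return np.arccos(np.clip(c, -1.0, 1.0))  # clip guards rounding errors

print(angle(np.array([1.0, 0.0]), np.array([0.0, 2.0])))  # orthogonal: pi/2
print(angle(np.array([1.0, 1.0]), np.array([1.0, 0.0])))  # acute angle: pi/4
```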

Projection

projection of x on the line defined by a nonzero y: the vector t_hat * y with

    t_hat = argmin_t ||x - t y||

expression for t_hat:

    t_hat = x^T y / ||y||^2 = ||x|| cos(theta) / ||y||,    so    t_hat * y = (x^T y / y^T y) y

[figure: projection of x onto the line through 0 and y]
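The closed-form expression is two inner products; the residual x - t_hat*y is orthogonal to y, which is the defining property of the projection (an illustrative sketch):

```python
import numpy as np

def project_on_line(x, y):
    """Projection of x on the line through 0 and y (y nonzero)."""
    t_hat = (x @ y) / (y @ y)   # t_hat = x^T y / ||y||^2
    return t_hat * y

x = np.array([2.0, 1.0])
y = np.array([1.0, 0.0])
p = project_on_line(x, y)
print(p)             # projection onto the x1-axis: (2, 0)
print((x - p) @ y)   # residual is orthogonal to y: 0.0
```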

Hyperplanes and halfspaces

hyperplane: the solution set of one linear equation with nonzero coefficient vector a,

    a^T x = b

halfspace: the solution set of one linear inequality with nonzero coefficient vector a,

    a^T x <= b

a is the normal vector

Geometrical interpretation

    G = {x | a^T x = b},    H = {x | a^T x <= b}

[figure: hyperplane G and halfspace H with normal vector a and the point u = (b/||a||^2) a]

- the vector u = (b/||a||^2) a satisfies a^T u = b
- x is in the hyperplane G if a^T (x - u) = 0 (x - u is orthogonal to a)
- x is in the halfspace H if a^T (x - u) <= 0 (the angle between x - u and a is at least pi/2)
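These membership tests are one inner product each; a small sketch with an assumed a = (2, 1) and b = 3:

```python
import numpy as np

a = np.array([2.0, 1.0])   # normal vector (assumption for the example)
b = 3.0
u = (b / (a @ a)) * a      # u = (b/||a||^2) a satisfies a^T u = b

def in_hyperplane(x, tol=1e-9):
    return abs(a @ (x - u)) <= tol   # x - u orthogonal to a

def in_halfspace(x, tol=1e-9):
    return a @ (x - u) <= tol        # equivalent to a^T x <= b

print(in_hyperplane(np.array([1.0, 1.0])))  # a^T x = 3 = b  -> True
print(in_halfspace(np.array([0.0, 0.0])))   # a^T x = 0 <= 3 -> True
print(in_halfspace(np.array([2.0, 2.0])))   # a^T x = 6 >  3 -> False
```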

Example

[figure: hyperplanes a^T x = b in R^2 for a = (2, 1) and several values of b (including 0, 5, 10), and a halfspace a^T x <= 3]

Polyhedron

the solution set of a finite number of linear inequalities

    a_1^T x <= b_1,   a_2^T x <= b_2,   ...,   a_m^T x <= b_m

[figure: polyhedron bounded by five halfspaces with normal vectors a_1, ..., a_5]

- a polyhedron is the intersection of a finite number of halfspaces
- in matrix notation: Ax <= b, if A is the matrix with rows a_i^T
- equalities can be included: Fx = g is equivalent to the pair Fx <= g, Fx >= g

Example

    x_1 + x_2 >= 1,   2x_1 + x_2 <= 2,   x_1 >= 0,   x_2 >= 0

[figure: the polyhedron in R^2 bounded by the lines x_1 + x_2 = 1 and 2x_1 + x_2 = 2 and the coordinate axes]

Example

    0 <= x_1 <= 2,   0 <= x_2 <= 2,   0 <= x_3 <= 2,   x_1 + x_2 + x_3 <= 5

[figure: the polyhedron in R^3, a cube with one corner cut off; its vertices include (0,0,2), (0,2,2), (1,2,2), (2,0,2), (2,1,2), (2,2,1), (0,2,0), (2,0,0), (2,2,0)]

Geometrical interpretation of LP

    minimize    c^T x
    subject to  Ax <= b

[figure: feasible polyhedron {x | Ax <= b} with cost vector c and the optimal solution at a vertex; the dashed lines (hyperplanes) are the level sets c^T x = alpha for different alpha]

Example

    minimize    -x_1 - x_2
    subject to  2x_1 + x_2 <= 3
                x_1 + 4x_2 <= 5
                x_1 >= 0,   x_2 >= 0

[figure: feasible set with the level lines -x_1 - x_2 = alpha for several values of alpha]

the optimal solution is (1, 1)
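The same example can be checked numerically with SciPy's `linprog` (illustration, not part of the slides); the solver lands on the vertex where the two inequality constraints intersect:

```python
import numpy as np
from scipy.optimize import linprog

# minimize -x1 - x2  subject to  2x1 + x2 <= 3,  x1 + 4x2 <= 5,  x >= 0
c = [-1.0, -1.0]
A = [[2.0, 1.0], [1.0, 4.0]]
b = [3.0, 5.0]
res = linprog(c, A_ub=A, b_ub=b, bounds=(0, None))
print(res.x)    # optimal solution: (1, 1)
print(res.fun)  # optimal value: -2
```

Solving 2x1 + x2 = 3 and x1 + 4x2 = 5 simultaneously gives exactly (1, 1), confirming the picture.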