Lecture 1 Introduction and overview


ESE504 (Fall 2013)

Lecture 1: Introduction and overview

- linear programming example
- course topics
- software
- integer linear programming

Linear program (LP)

    minimize    \sum_{j=1}^n c_j x_j
    subject to  \sum_{j=1}^n a_{ij} x_j \le b_i,   i = 1,\dots,m
                \sum_{j=1}^n c_{ij} x_j = d_i,   i = 1,\dots,p

- variables: x_j
- problem data: the coefficients c_j, a_{ij}, b_i, c_{ij}, d_i
- can be solved very efficiently (several 10,000s of variables and constraints)
- widely available general-purpose software
- extensive, useful theory (optimality conditions, sensitivity analysis, ...)
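As a concrete illustration (not part of the original slides), the sketch below solves a small LP of this form with SciPy's linprog; the data c, A, b are made up.

```python
# A minimal LP sketch, assuming SciPy is installed; the data are made up.
import numpy as np
from scipy.optimize import linprog

# maximize x1 + 2 x2  subject to  A x <= b, x >= 0
# (linprog minimizes, so negate the objective)
c = np.array([-1.0, -2.0])
A = np.array([[1.0, 1.0],
              [1.0, 3.0]])
b = np.array([4.0, 6.0])

res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)
print(res.x, -res.fun)   # optimal point (3, 1) and optimal value 5
```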

Example: open-loop control problem

single-input/single-output system (with input u, output y):

    y(t) = h_0 u(t) + h_1 u(t-1) + h_2 u(t-2) + h_3 u(t-3) + \cdots

output tracking problem: minimize the deviation from the desired output y_des(t),

    max_{t=0,\dots,N} |y(t) - y_des(t)|

subject to input amplitude and slew rate constraints:

    |u(t)| \le U,   |u(t+1) - u(t)| \le S

variables: u(0), ..., u(M) (with u(t) = 0 for t < 0, t > M)

solution: can be formulated as an LP, hence easily solved (more later)

example

step response (s(t) = h_t + \cdots + h_0) and desired output:

[figure: step response s(t) and desired output y_des(t)]

amplitude and slew rate constraints on u:

    |u(t)| \le 1.1,   |u(t) - u(t-1)| \le 0.25

optimal solution

[figures: output and desired output; input u(t) with amplitude limit 1.1; slew rate u(t) - u(t-1) with limit 0.25]
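For concreteness, here is a sketch (not from the slides) that sets up this tracking problem as an LP with linprog; the impulse response h and target y_des are made up, since the slides specify them only through plots.

```python
# Output-tracking LP sketch, assuming SciPy; h and y_des are hypothetical.
import numpy as np
from scipy.optimize import linprog

N, M, U, S = 60, 40, 1.1, 0.25
h = 0.5 ** np.arange(8)                                  # made-up impulse response
y_des = np.concatenate([np.zeros(10), np.ones(N - 9)])   # made-up target (delayed step)

nu = M + 1                       # variables z = (u(0), ..., u(M), w)
c = np.zeros(nu + 1); c[-1] = 1.0

T = np.zeros((N + 1, nu))        # y(t) = sum_i h_i u(t - i)
for t in range(N + 1):
    for i, hi in enumerate(h):
        if 0 <= t - i <= M:
            T[t, t - i] = hi

rows, rhs = [], []
# tracking error: -w <= (T u)(t) - y_des(t) <= w
rows += [np.hstack([T, -np.ones((N + 1, 1))]), np.hstack([-T, -np.ones((N + 1, 1))])]
rhs += [y_des, -y_des]
# slew rate: |u(t+1) - u(t)| <= S
D = np.zeros((M, nu))
for t in range(M):
    D[t, t], D[t, t + 1] = -1.0, 1.0
D = np.hstack([D, np.zeros((M, 1))])
rows += [D, -D]; rhs += [S * np.ones(M), S * np.ones(M)]

res = linprog(c, A_ub=np.vstack(rows), b_ub=np.concatenate(rhs),
              bounds=[(-U, U)] * nu + [(0, None)])       # amplitude limit on u
print("peak tracking error:", res.fun)
```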

Brief history

- 1930s (Kantorovich): economic applications
- 1940s (Dantzig): military logistics problems during World War II; 1947: simplex algorithm
- 1950s-60s: discovery of applications in many other fields (structural optimization, control theory, filter design, ...)
- 1979 (Khachiyan): ellipsoid algorithm; more efficient (polynomial-time) than simplex in the worst case, but slower in practice
- 1984 (Karmarkar): projective (interior-point) algorithm; polynomial-time worst-case complexity, and efficient in practice
- 1984-today: many variations of interior-point methods (improved complexity or efficiency in practice), software for large-scale problems

Course outline

- the linear programming problem
- linear inequalities, geometry of linear programming
- engineering applications: signal processing, control, structural optimization, ...
- duality
- algorithms: the simplex algorithm, interior-point algorithms
- large-scale linear programming and network optimization: techniques for LPs with special structure, network flow problems
- integer linear programming: introduction, some basic techniques

Software

- solvers: solve LPs described in some standard form
- modeling tools: accept a problem in a simpler, more intuitive notation and convert it to the standard form required by solvers

software for this course (see class website):

- platforms: Matlab, Octave, Python
- solvers: linprog (Matlab Optimization Toolbox), ...
- modeling tools: CVX (Matlab), YALMIP (Matlab), ...

Thanks to Lieven Vandenberghe at UCLA for his slides.
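To show the modeling-tool style in Python (the slides name only the Matlab tools CVX and YALMIP), here is a sketch using CVXPY, a comparable Python modeling package; the package choice and the data are assumptions for illustration.

```python
# Modeling-tool style: a CVXPY sketch (CVXPY itself is an assumption here;
# the slides list CVX and YALMIP for Matlab). The data are made up.
import numpy as np
import cvxpy as cp

A = np.array([[1.0, 1.0], [1.0, 3.0]])
b = np.array([4.0, 6.0])
c = np.array([1.0, 2.0])

x = cp.Variable(2)
prob = cp.Problem(cp.Maximize(c @ x), [A @ x <= b, x >= 0])
prob.solve()      # the modeling layer converts this to a solver's standard form
print(x.value, prob.value)
```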

Integer linear program

    minimize    \sum_{j=1}^n c_j x_j
    subject to  \sum_{j=1}^n a_{ij} x_j \le b_i,   i = 1,\dots,m
                \sum_{j=1}^n c_{ij} x_j = d_i,   i = 1,\dots,p
                x_j \in Z

Boolean linear program

    minimize    \sum_{j=1}^n c_j x_j
    subject to  \sum_{j=1}^n a_{ij} x_j \le b_i,   i = 1,\dots,m
                \sum_{j=1}^n c_{ij} x_j = d_i,   i = 1,\dots,p
                x_j \in {0, 1}

- very general problems; can be extremely hard to solve
- can be solved as a sequence of linear programs

Example: scheduling problem

[figure: scheduling graph V, with an arc from node i to node j]

- n nodes represent operations (e.g., jobs in a manufacturing process, arithmetic operations in an algorithm)
- (i, j) \in V means operation j must wait for operation i to be finished
- M identical machines/processors; each operation takes unit time

problem: determine the fastest schedule

Boolean linear program formulation

variables: x_{is}, i = 1,\dots,n, s = 0,\dots,T:

    x_{is} = 1 if job i starts at time s, x_{is} = 0 otherwise

constraints:

1. x_{is} \in {0, 1}
2. job i starts exactly once:

       \sum_{s=0}^T x_{is} = 1

3. if there is an arc (i, j) in V, then

       \sum_{s=0}^T s x_{js} \ge \sum_{s=0}^T s x_{is} + 1

4. limit on capacity (M machines) at time s:

       \sum_{i=1}^n x_{is} \le M

cost function (start time of job n):

    \sum_{s=0}^T s x_{ns}

Boolean linear program

    minimize    \sum_{s=0}^T s x_{ns}
    subject to  \sum_{s=0}^T x_{is} = 1,   i = 1,\dots,n
                \sum_{s=0}^T s x_{js} - \sum_{s=0}^T s x_{is} \ge 1,   (i, j) \in V
                \sum_{i=1}^n x_{is} \le M,   s = 0,\dots,T
                x_{is} \in {0, 1},   i = 1,\dots,n,  s = 0,\dots,T
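A small numeric sketch of this formulation (not from the slides): it solves a made-up three-job instance with scipy.optimize.milp (available in SciPy 1.9+), which applies branch-and-bound over LP relaxations.

```python
# Tiny instance of the scheduling Boolean LP via scipy.optimize.milp (SciPy >= 1.9).
# Made-up data: 3 unit-time jobs, jobs 0 and 1 precede job 2, M = 1 machine.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

n, T, M = 3, 4, 1
V = [(0, 2), (1, 2)]                      # (i, j): job j waits for job i
idx = lambda i, s: i * (T + 1) + s        # flatten x_{is}
nvar = n * (T + 1)

c = np.zeros(nvar)
for s in range(T + 1):
    c[idx(n - 1, s)] = s                  # minimize the start time of the last job

A_once = np.zeros((n, nvar))              # each job starts exactly once
for i in range(n):
    A_once[i, idx(i, 0):idx(i, T) + 1] = 1.0

A_prec = np.zeros((len(V), nvar))         # sum_s s x_{js} - sum_s s x_{is} >= 1
for k, (i, j) in enumerate(V):
    for s in range(T + 1):
        A_prec[k, idx(j, s)] = s
        A_prec[k, idx(i, s)] = -s

A_cap = np.zeros((T + 1, nvar))           # at most M jobs start at each time
for s in range(T + 1):
    for i in range(n):
        A_cap[s, idx(i, s)] = 1.0

cons = [LinearConstraint(A_once, 1, 1),
        LinearConstraint(A_prec, 1, np.inf),
        LinearConstraint(A_cap, -np.inf, M)]
res = milp(c, constraints=cons, integrality=np.ones(nvar), bounds=Bounds(0, 1))
print("start times:", res.x.reshape(n, T + 1).argmax(axis=1))  # expect [0 1 2] or [1 0 2]
```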

ESE504 (Fall 2013)

Lecture 2: Linear inequalities

- vectors
- inner products and norms
- linear equalities and hyperplanes
- linear inequalities and halfspaces
- polyhedra

Vectors

(column) vector x \in R^n:

    x = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}

- x_i \in R: the ith component or element of x
- also written as x = (x_1, x_2, \dots, x_n)

some special vectors:

- x = 0 (zero vector): x_i = 0, i = 1,\dots,n
- x = 1: x_i = 1, i = 1,\dots,n
- x = e_i (the ith basis vector or ith unit vector): x_i = 1, x_k = 0 for k \ne i

(n follows from context)

Vector operations

multiplying a vector x \in R^n with a scalar \alpha \in R:

    \alpha x = \begin{bmatrix} \alpha x_1 \\ \vdots \\ \alpha x_n \end{bmatrix}

adding and subtracting two vectors x, y \in R^n:

    x + y = \begin{bmatrix} x_1 + y_1 \\ \vdots \\ x_n + y_n \end{bmatrix},   x - y = \begin{bmatrix} x_1 - y_1 \\ \vdots \\ x_n - y_n \end{bmatrix}

[figure: vectors x, y, 0.75x, 1.5y, and the combination 0.75x + 1.5y]

Inner product

for x, y \in R^n,

    \langle x, y \rangle := x_1 y_1 + x_2 y_2 + \cdots + x_n y_n = x^T y

important properties:

- \langle \alpha x, y \rangle = \alpha \langle x, y \rangle
- \langle x + y, z \rangle = \langle x, z \rangle + \langle y, z \rangle
- \langle x, y \rangle = \langle y, x \rangle
- \langle x, x \rangle \ge 0
- \langle x, x \rangle = 0 \iff x = 0

linear function: f : R^n \to R is linear, i.e.,

    f(\alpha x + \beta y) = \alpha f(x) + \beta f(y),

if and only if f(x) = \langle a, x \rangle for some a

Euclidean norm

for x \in R^n we define the (Euclidean) norm as

    \|x\| = \sqrt{x_1^2 + x_2^2 + \cdots + x_n^2} = \sqrt{x^T x}

\|x\| measures the length of the vector (from the origin)

important properties:

- \|\alpha x\| = |\alpha| \|x\| (homogeneity)
- \|x + y\| \le \|x\| + \|y\| (triangle inequality)
- \|x\| \ge 0 (nonnegativity)
- \|x\| = 0 \iff x = 0 (definiteness)

distance between vectors: dist(x, y) = \|x - y\|

Inner products and angles

angle between vectors in R^n:

    \theta = \angle(x, y) = \cos^{-1} \frac{x^T y}{\|x\| \|y\|}

i.e., x^T y = \|x\| \|y\| \cos\theta

- x and y aligned: \theta = 0; x^T y = \|x\| \|y\|
- x and y opposed: \theta = \pi; x^T y = -\|x\| \|y\|
- x and y orthogonal: \theta = \pi/2 or -\pi/2; x^T y = 0 (denoted x \perp y)
- x^T y > 0 means \angle(x, y) is acute; x^T y < 0 means \angle(x, y) is obtuse

[figure: the halfspaces x^T y > 0 and x^T y < 0 on either side of the hyperplane through 0 with normal x]

Cauchy-Schwarz inequality:

    |x^T y| \le \|x\| \|y\|

projection of x on y:

[figure: x, y, the angle \theta between them, and the projection of x on y]

the projection is given by

    \left( \frac{x^T y}{\|y\|^2} \right) y
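A quick numeric check of the projection formula (added for illustration; the vectors are made up):

```python
# Numeric check of the projection formula; x and y are made up.
import numpy as np

x = np.array([2.0, 1.0, 0.0])
y = np.array([1.0, 1.0, 1.0])

proj = (x @ y / (y @ y)) * y     # (x^T y / ||y||^2) y
print(proj)                      # [1. 1. 1.]
print((x - proj) @ y)            # residual is orthogonal to y: 0.0
```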

Hyperplanes

hyperplane in R^n: {x | a^T x = b} (a \ne 0)

- solution set of one linear equation a_1 x_1 + \cdots + a_n x_n = b with at least one a_i \ne 0
- set of vectors that make a constant inner product with the vector a = (a_1, \dots, a_n) (the normal vector)

[figure: the hyperplane {x | a^T x = a^T x_0} through x_0 with normal vector a]

in R^2: a line; in R^3: a plane; ...

Halfspaces

(closed) halfspace in R^n: {x | a^T x \le b} (a \ne 0)

- solution set of one linear inequality a_1 x_1 + \cdots + a_n x_n \le b with at least one a_i \ne 0
- a = (a_1, \dots, a_n) is the (outward) normal

[figure: the halfspaces {x | a^T x \le a^T x_0} and {x | a^T x \ge a^T x_0}]

{x | a^T x < b} is called an open halfspace

Affine sets

solution set of a set of linear equations

    a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n = b_1
    a_{21} x_1 + a_{22} x_2 + \cdots + a_{2n} x_n = b_2
    \vdots
    a_{m1} x_1 + a_{m2} x_2 + \cdots + a_{mn} x_n = b_m

intersection of m hyperplanes with normal vectors a_i = (a_{i1}, a_{i2}, \dots, a_{in}) (w.l.o.g., all a_i \ne 0)

in matrix notation: Ax = b with

    A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix},   b = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{bmatrix}

Polyhedra

solution set of a system of linear inequalities

    a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n \le b_1
    \vdots
    a_{m1} x_1 + a_{m2} x_2 + \cdots + a_{mn} x_n \le b_m

intersection of m halfspaces, with normal vectors a_i = (a_{i1}, a_{i2}, \dots, a_{in}) (w.l.o.g., all a_i \ne 0)

[figure: polyhedron bounded by five halfspaces with outward normals a_1, \dots, a_5]

matrix notation

    Ax \le b

with

    A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix},   b = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{bmatrix}

Ax \le b stands for componentwise inequality, i.e., for y, z \in R^n,

    y \le z  \iff  y_1 \le z_1, \dots, y_n \le z_n

Examples of polyhedra

- a hyperplane {x | a^T x = b}:  a^T x \le b, a^T x \ge b
- the solution set of a system of linear equations/inequalities:  a_i^T x \le b_i, i = 1,\dots,m;  c_i^T x = d_i, i = 1,\dots,p
- a slab {x | b_1 \le a^T x \le b_2}
- the probability simplex {x \in R^n | 1^T x = 1, x_i \ge 0, i = 1,\dots,n}
- a (hyper)rectangle {x \in R^n | l \le x \le u} where l < u


ESE504 (Fall 2013)

Lecture 3: Geometry of linear programming

- subspaces and affine sets, independent vectors
- matrices, range and nullspace, rank, inverse
- polyhedron in inequality form
- extreme points
- degeneracy
- the optimal set of a linear program

Subspaces

S \subseteq R^n (S \ne \emptyset) is called a subspace if

    x, y \in S,  \alpha, \beta \in R  \implies  \alpha x + \beta y \in S

(\alpha x + \beta y is called a linear combination of x and y)

examples (in R^n):

- S = R^n,  S = {0}
- S = {\alpha v | \alpha \in R} where v \in R^n (i.e., a line through the origin)
- S = span(v_1, v_2, \dots, v_k) = {\alpha_1 v_1 + \cdots + \alpha_k v_k | \alpha_i \in R}, where v_i \in R^n
- the set of vectors orthogonal to given vectors v_1, \dots, v_k:

      S = {x \in R^n | v_1^T x = 0, \dots, v_k^T x = 0}

Independent vectors

vectors v_1, v_2, \dots, v_k are independent if and only if

    \alpha_1 v_1 + \alpha_2 v_2 + \cdots + \alpha_k v_k = 0  \implies  \alpha_1 = \alpha_2 = \cdots = \alpha_k = 0

some equivalent conditions:

- the coefficients of \alpha_1 v_1 + \cdots + \alpha_k v_k are uniquely determined, i.e.,

      \alpha_1 v_1 + \cdots + \alpha_k v_k = \beta_1 v_1 + \cdots + \beta_k v_k

  implies \alpha_1 = \beta_1, \alpha_2 = \beta_2, \dots, \alpha_k = \beta_k
- no vector v_i can be expressed as a linear combination of the other vectors v_1, \dots, v_{i-1}, v_{i+1}, \dots, v_k

Basis and dimension

{v_1, v_2, \dots, v_k} is a basis for a subspace S if

- v_1, v_2, \dots, v_k span S, i.e., S = span(v_1, v_2, \dots, v_k)
- v_1, v_2, \dots, v_k are independent

equivalently: every v \in S can be uniquely expressed as v = \alpha_1 v_1 + \cdots + \alpha_k v_k

fact: for a given subspace S, the number of vectors in any basis is the same; it is called the dimension of S, denoted dim S

Affine sets

V \subseteq R^n (V \ne \emptyset) is called an affine set if

    x, y \in V,  \alpha + \beta = 1  \implies  \alpha x + \beta y \in V

(\alpha x + \beta y is called an affine combination of x and y)

examples (in R^n):

- subspaces
- V = b + S = {x + b | x \in S} where S is a subspace
- V = {\alpha_1 v_1 + \cdots + \alpha_k v_k | \alpha_i \in R, \sum_i \alpha_i = 1}
- V = {x | v_1^T x = b_1, \dots, v_k^T x = b_k} (if V \ne \emptyset)

every affine set V can be written as V = x_0 + S where x_0 \in R^n and S is a subspace (e.g., take any x_0 \in V, S = V - x_0)

dim(V - x_0) is called the dimension of V

Matrices

    A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix} \in R^{m \times n}

some special matrices:

- A = 0 (zero matrix): a_{ij} = 0
- A = I (identity matrix): m = n and A_{ii} = 1 for i = 1,\dots,n, A_{ij} = 0 for i \ne j
- A = diag(x) where x \in R^n (diagonal matrix): m = n and

      A = \begin{bmatrix} x_1 & 0 & \cdots & 0 \\ 0 & x_2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & x_n \end{bmatrix}

Matrix operations

- addition, subtraction, scalar multiplication
- transpose:

      A^T = \begin{bmatrix} a_{11} & a_{21} & \cdots & a_{m1} \\ a_{12} & a_{22} & \cdots & a_{m2} \\ \vdots & & & \vdots \\ a_{1n} & a_{2n} & \cdots & a_{mn} \end{bmatrix} \in R^{n \times m}

- multiplication: A \in R^{m \times n}, B \in R^{n \times q}, AB \in R^{m \times q}:

      AB = \begin{bmatrix} \sum_{i=1}^n a_{1i} b_{i1} & \sum_{i=1}^n a_{1i} b_{i2} & \cdots & \sum_{i=1}^n a_{1i} b_{iq} \\ \sum_{i=1}^n a_{2i} b_{i1} & \sum_{i=1}^n a_{2i} b_{i2} & \cdots & \sum_{i=1}^n a_{2i} b_{iq} \\ \vdots & & & \vdots \\ \sum_{i=1}^n a_{mi} b_{i1} & \sum_{i=1}^n a_{mi} b_{i2} & \cdots & \sum_{i=1}^n a_{mi} b_{iq} \end{bmatrix}

Rows and columns

rows of A \in R^{m \times n}:

    A = \begin{bmatrix} a_1^T \\ a_2^T \\ \vdots \\ a_m^T \end{bmatrix}   with a_i = (a_{i1}, a_{i2}, \dots, a_{in}) \in R^n

columns of B \in R^{n \times q}:

    B = \begin{bmatrix} b_1 & b_2 & \cdots & b_q \end{bmatrix}   with b_i = (b_{1i}, b_{2i}, \dots, b_{ni}) \in R^n

for example, we can write AB as

    AB = \begin{bmatrix} a_1^T b_1 & a_1^T b_2 & \cdots & a_1^T b_q \\ a_2^T b_1 & a_2^T b_2 & \cdots & a_2^T b_q \\ \vdots & & & \vdots \\ a_m^T b_1 & a_m^T b_2 & \cdots & a_m^T b_q \end{bmatrix}

Range of a matrix

the range of A \in R^{m \times n} is defined as

    R(A) = {Ax | x \in R^n} \subseteq R^m

- a subspace
- the set of vectors that can be 'hit' by the mapping y = Ax
- the span of the columns of A = [a_1 \cdots a_n]:

      R(A) = {a_1 x_1 + \cdots + a_n x_n | x \in R^n}

- the set of vectors y such that Ax = y has a solution

R(A) = R^m:

- Ax = y can be solved in x for any y
- the columns of A span R^m
- dim R(A) = m

Interpretations

suppose v \in R(A), w \notin R(A)

y = Ax represents the output resulting from input x:

- v is a possible result or output
- w cannot be a result or output
- R(A) characterizes the achievable outputs

y = Ax represents a measurement of x:

- y = v is a possible or consistent sensor signal
- y = w is impossible or inconsistent; sensors have failed or the model is wrong
- R(A) characterizes the possible results

Nullspace of a matrix

the nullspace of A \in R^{m \times n} is defined as

    N(A) = {x \in R^n | Ax = 0}

- a subspace
- the set of vectors mapped to zero by y = Ax
- the set of vectors orthogonal to all rows of A:

      N(A) = {x \in R^n | a_1^T x = \cdots = a_m^T x = 0}

  where A = [a_1 \cdots a_m]^T

zero nullspace: N(A) = {0}:

- x can always be uniquely determined from y = Ax (i.e., the linear transformation y = Ax doesn't lose information)
- the columns of A are independent

Interpretations

suppose z \in N(A)

y = Ax represents the output resulting from input x:

- z is an input with no result
- x and x + z have the same result
- N(A) characterizes the freedom of input choice for a given result

y = Ax represents a measurement of x:

- z is undetectable: it gives zero sensor readings
- x and x + z are indistinguishable: Ax = A(x + z)
- N(A) characterizes the ambiguity in x from y = Ax

Inverse

A \in R^{n \times n} is invertible or nonsingular if det A \ne 0

equivalent conditions:

- the columns of A are a basis for R^n
- the rows of A are a basis for R^n
- N(A) = {0}
- R(A) = R^n
- y = Ax has a unique solution x for every y \in R^n
- A has an inverse A^{-1} \in R^{n \times n}, with A A^{-1} = A^{-1} A = I

Rank of a matrix

we define the rank of A \in R^{m \times n} as

    rank(A) = dim R(A)

(nontrivial) facts:

- rank(A) = rank(A^T)
- rank(A) is the maximum number of independent columns (or rows) of A; hence rank(A) \le min{m, n}
- rank(A) + dim N(A) = n
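A quick numerical check of these facts (added for illustration; the matrix is made up):

```python
# Numeric check of the rank facts; the matrix is made up.
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])          # rank 1: the second row is twice the first

r = np.linalg.matrix_rank(A)
print(r, np.linalg.matrix_rank(A.T))     # rank(A) = rank(A^T) = 1
print(r + null_space(A).shape[1])        # rank(A) + dim N(A) = n = 3
```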

Full rank matrices

for A \in R^{m \times n} we have rank(A) \le min{m, n}

we say A is full rank if rank(A) = min{m, n}

- for square matrices, full rank means nonsingular
- for skinny matrices (m > n), full rank means the columns are independent
- for fat matrices (m < n), full rank means the rows are independent

Sets of linear equations

given A \in R^{m \times n}, y \in R^m:

    Ax = y

- solvable if and only if y \in R(A)
- unique solution if y \in R(A) and rank(A) = n
- general solution set: {x_0 + v | v \in N(A)}, where A x_0 = y
- A square and invertible: unique solution for every y:  x = A^{-1} y

Polyhedron (inequality form)

A = [a_1 \cdots a_m]^T \in R^{m \times n}, b \in R^m:

    P = {x | Ax \le b} = {x | a_i^T x \le b_i, i = 1,\dots,m}

[figure: polyhedron P with outward normals a_1, \dots, a_6]

P is convex:

    x, y \in P,  0 \le \lambda \le 1  \implies  \lambda x + (1 - \lambda) y \in P

i.e., the line segment between any two points in P lies in P

Extreme points and vertices

x \in P is an extreme point if it cannot be written as

    x = \lambda y + (1 - \lambda) z

with 0 \le \lambda \le 1, y, z \in P, y \ne x, z \ne x

x \in P is a vertex if there is a c such that c^T x < c^T y for all y \in P, y \ne x

[figure: polyhedron P with the hyperplane c^T x = constant supporting P at a vertex]

fact: x is an extreme point \iff x is a vertex (proof later)

Basic feasible solution

define I as the set of indices of the active or binding constraints (at x*):

    a_i^T x* = b_i, i \in I;   a_i^T x* < b_i, i \notin I

define \bar{A} as

    \bar{A} = \begin{bmatrix} a_{i_1}^T \\ a_{i_2}^T \\ \vdots \\ a_{i_k}^T \end{bmatrix},   I = {i_1, \dots, i_k}

x* is called a basic feasible solution if

    rank \bar{A} = n

fact: x* is a vertex (extreme point) \iff x* is a basic feasible solution (proof later)

Example

[figure: polyhedron in R^2 with the point (1,1) marked]

- (1,1) is an extreme point
- (1,1) is a vertex: the unique minimizer of c^T x with c = (-1, -1)
- (1,1) is a basic feasible solution: I = {2, 4} and rank \bar{A} = 2, where \bar{A} = [ ]
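To make the definition concrete, here is a small checker (added for illustration) that finds the active set and tests the rank condition; the polyhedron is a made-up unit square, not the one in the slide's figure.

```python
# Checking the basic-feasible-solution condition numerically; the polyhedron
# here is a made-up unit square, not the one from the slide.
import numpy as np

# P = {x | Ax <= b}: the unit square 0 <= x1, x2 <= 1
A = np.array([[-1.0, 0.0], [1.0, 0.0], [0.0, -1.0], [0.0, 1.0]])
b = np.array([0.0, 1.0, 0.0, 1.0])
x = np.array([1.0, 1.0])

active = np.isclose(A @ x, b)            # index set I of binding constraints
A_bar = A[active]
print("I =", np.flatnonzero(active))     # [1 3] (0-based indexing)
print("basic feasible solution:",
      bool(np.all(A @ x <= b + 1e-9) and np.linalg.matrix_rank(A_bar) == x.size))
```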

Equivalence of the three definitions

vertex \implies extreme point

- let x* be a vertex of P, i.e., there is a c \ne 0 such that c^T x* < c^T x for all x \in P, x \ne x*
- let y, z \in P, y \ne x*, z \ne x*:

      c^T x* < c^T y,   c^T x* < c^T z

  so, if 0 \le \lambda \le 1, then

      c^T x* < c^T (\lambda y + (1 - \lambda) z)

  hence x* \ne \lambda y + (1 - \lambda) z

extreme point \implies basic feasible solution

- suppose x* \in P is an extreme point with

      a_i^T x* = b_i, i \in I;   a_i^T x* < b_i, i \notin I

- suppose x* is not a basic feasible solution; then there exists a d \ne 0 with

      a_i^T d = 0, i \in I

  and, for small enough \epsilon > 0,

      y = x* + \epsilon d \in P,   z = x* - \epsilon d \in P

- we have x* = 0.5 y + 0.5 z, which contradicts the assumption that x* is an extreme point

basic feasible solution \implies vertex

- suppose x* \in P is a basic feasible solution and

      a_i^T x* = b_i, i \in I;   a_i^T x* < b_i, i \notin I

- define c = -\sum_{i \in I} a_i; then

      c^T x* = -\sum_{i \in I} b_i

  and for all x \in P,

      c^T x \ge -\sum_{i \in I} b_i

  with equality only if a_i^T x = b_i, i \in I

- however, the only solution of a_i^T x = b_i, i \in I, is x*; hence c^T x* < c^T x for all x \in P, x \ne x*

Degeneracy

set of linear inequalities

    a_i^T x \le b_i,   i = 1,\dots,m

a basic feasible solution x* with

    a_i^T x* = b_i, i \in I;   a_i^T x* < b_i, i \notin I

is degenerate if the number of indices in I is greater than n

- a property of the description of the polyhedron, not its geometry
- affects the performance of some algorithms
- disappears with small perturbations of b

Unbounded directions

P contains a half-line if there exist d \ne 0, x_0 such that x_0 + td \in P for all t \ge 0

equivalent condition for P = {x | Ax \le b}:

    A x_0 \le b,   A d \le 0

fact: P unbounded \iff P contains a half-line

P contains a line if there exist d \ne 0, x_0 such that x_0 + td \in P for all t

equivalent condition for P = {x | Ax \le b}:

    A x_0 \le b,   A d = 0

fact: P has no extreme points \iff P contains a line

Optimal set of an LP

    minimize    c^T x
    subject to  Ax \le b

- optimal value: p* = min{c^T x | Ax \le b} (p* = \pm\infty is possible)
- optimal point: x* with Ax* \le b and c^T x* = p*
- optimal set: X_opt = {x | Ax \le b, c^T x = p*}

example

    minimize    c_1 x_1 + c_2 x_2
    subject to  -2x_1 + x_2 \le 1
                x_1 \ge 0,  x_2 \ge 0

- c = (1, 1):  X_opt = {(0, 0)},  p* = 0
- c = (1, 0):  X_opt = {(0, x_2) | 0 \le x_2 \le 1},  p* = 0
- c = (-1, -1):  X_opt = \emptyset,  p* = -\infty
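The sketch below (added for illustration) checks the three cases with SciPy's linprog; for c = (1, 0) the solver returns one point of the optimal set, and status 3 signals an unbounded problem.

```python
# Numeric check of the optimal-set example with linprog.
import numpy as np
from scipy.optimize import linprog

A = np.array([[-2.0, 1.0]])          # -2 x1 + x2 <= 1
b = np.array([1.0])
bounds = [(0, None), (0, None)]      # x1 >= 0, x2 >= 0

for c in ([1, 1], [1, 0], [-1, -1]):
    res = linprog(c, A_ub=A, b_ub=b, bounds=bounds)
    print(c, res.status, res.x, res.fun)   # status 3: unbounded (p* = -inf)
```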

Existence of optimal points

- p* = -\infty if and only if there exists a feasible half-line

      {x_0 + td | t \ge 0}

  with c^T d < 0

  [figure: feasible half-line from x_0 in direction d]

- p* = +\infty if and only if P = \emptyset
- p* is finite if and only if X_opt \ne \emptyset

  [figure: optimal set X_opt where the hyperplane with normal c supports P]

property: if P has at least one extreme point and p* is finite, then there exists an extreme point that is optimal

[figure: X_opt containing an extreme point of P, with objective normal c]

ESE504 (Fall 2013)

Lecture 4: The linear programming problem: variants and examples

- variants of the linear programming problem
- LP feasibility problem
- examples and some general applications
- linear-fractional programming

Variants of the linear programming problem

general form

    minimize    c^T x
    subject to  a_i^T x \le b_i,   i = 1,\dots,m
                g_i^T x = h_i,   i = 1,\dots,p

in matrix notation:

    minimize    c^T x
    subject to  Ax \le b
                Gx = h

where

    A = \begin{bmatrix} a_1^T \\ a_2^T \\ \vdots \\ a_m^T \end{bmatrix} \in R^{m \times n},   G = \begin{bmatrix} g_1^T \\ g_2^T \\ \vdots \\ g_p^T \end{bmatrix} \in R^{p \times n}

inequality form LP

    minimize    c^T x
    subject to  a_i^T x \le b_i,   i = 1,\dots,m

in matrix notation:

    minimize    c^T x
    subject to  Ax \le b

standard form LP

    minimize    c^T x
    subject to  g_i^T x = h_i,   i = 1,\dots,m
                x \ge 0

in matrix notation:

    minimize    c^T x
    subject to  Gx = h
                x \ge 0

Reduction of general LP to inequality/standard form

    minimize    c^T x
    subject to  a_i^T x \le b_i,   i = 1,\dots,m
                g_i^T x = h_i,   i = 1,\dots,p

reduction to inequality form:

    minimize    c^T x
    subject to  a_i^T x \le b_i,   i = 1,\dots,m
                g_i^T x \le h_i,   i = 1,\dots,p
                -g_i^T x \le -h_i,   i = 1,\dots,p

in matrix notation (where A has rows a_i^T, G has rows g_i^T):

    minimize    c^T x
    subject to  \begin{bmatrix} A \\ G \\ -G \end{bmatrix} x \le \begin{bmatrix} b \\ h \\ -h \end{bmatrix}

reduction to standard form:

    minimize    c^T x^+ - c^T x^-
    subject to  a_i^T x^+ - a_i^T x^- + s_i = b_i,   i = 1,\dots,m
                g_i^T x^+ - g_i^T x^- = h_i,   i = 1,\dots,p
                x^+, x^-, s \ge 0

- variables x^+, x^-, s
- recover x as x = x^+ - x^-
- s \in R^m is called a slack variable

in matrix notation:

    minimize    \tilde{c}^T \tilde{x}
    subject to  \tilde{G} \tilde{x} = \tilde{h}
                \tilde{x} \ge 0

where

    \tilde{x} = \begin{bmatrix} x^+ \\ x^- \\ s \end{bmatrix},   \tilde{c} = \begin{bmatrix} c \\ -c \\ 0 \end{bmatrix},   \tilde{G} = \begin{bmatrix} A & -A & I \\ G & -G & 0 \end{bmatrix},   \tilde{h} = \begin{bmatrix} b \\ h \end{bmatrix}

LP feasibility problem

feasibility problem: find x that satisfies

    a_i^T x \le b_i,   i = 1,\dots,m

solution via LP (with variables x, t):

    minimize    t
    subject to  a_i^T x \le b_i + t,   i = 1,\dots,m

if the minimizer x*, t* satisfies t* \le 0, then x* satisfies the inequalities

in matrix notation:

    minimize    \tilde{c}^T \tilde{x}
    subject to  \tilde{A} \tilde{x} \le \tilde{b}

with

    \tilde{x} = \begin{bmatrix} x \\ t \end{bmatrix},   \tilde{c} = \begin{bmatrix} 0 \\ 1 \end{bmatrix},   \tilde{A} = \begin{bmatrix} A & -1 \end{bmatrix},   \tilde{b} = b

Piecewise-linear minimization

piecewise-linear minimization:

    minimize    max_{i=1,\dots,m} (c_i^T x + d_i)

[figure: the piecewise-linear function max_i (c_i^T x + d_i) as the upper envelope of the lines c_i^T x + d_i]

equivalent LP (with variables x \in R^n, t \in R):

    minimize    t
    subject to  c_i^T x + d_i \le t,   i = 1,\dots,m

in matrix notation:

    minimize    \tilde{c}^T \tilde{x}
    subject to  \tilde{A} \tilde{x} \le \tilde{b}

with

    \tilde{x} = \begin{bmatrix} x \\ t \end{bmatrix},   \tilde{c} = \begin{bmatrix} 0 \\ 1 \end{bmatrix},   \tilde{A} = \begin{bmatrix} C & -1 \end{bmatrix},   \tilde{b} = -d

(where C has rows c_i^T and d = (d_1, \dots, d_m))
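A direct sketch of this reduction (added for illustration; the data C, d are made up):

```python
# Piecewise-linear minimization as an LP; the data C and d are made up.
import numpy as np
from scipy.optimize import linprog

m, n = 5, 2
rng = np.random.default_rng(0)
C = rng.standard_normal((m, n))
d = rng.standard_normal(m)

# variables (x, t): minimize t  subject to  C x - t 1 <= -d
obj = np.r_[np.zeros(n), 1.0]
A_ub = np.hstack([C, -np.ones((m, 1))])
res = linprog(obj, A_ub=A_ub, b_ub=-d, bounds=[(None, None)] * (n + 1))
x = res.x[:n]
print(res.fun, np.max(C @ x + d))    # the two values agree
```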

Convex functions

f : R^n \to R is convex if, for 0 \le \lambda \le 1,

    f(\lambda x + (1 - \lambda) y) \le \lambda f(x) + (1 - \lambda) f(y)

[figure: chord \lambda f(x) + (1 - \lambda) f(y) lying above the graph of f between x and y]

Piecewise-linear approximation

assume f : R^n \to R is differentiable and convex

- the first-order approximation at x_1 is a global lower bound on f:

      f(x) \ge f(x_1) + \nabla f(x_1)^T (x - x_1)

  [figure: tangent line at x_1 lying below the graph of f]

- evaluating f, \nabla f at several points x_i yields a piecewise-linear lower bound:

      f(x) \ge max_{i=1,\dots,K} ( f(x_i) + \nabla f(x_i)^T (x - x_i) )

Convex optimization problem

    minimize    f_0(x)

(f_0 convex and differentiable)

LP approximation (choose points x_j, j = 1,\dots,K):

    minimize    t
    subject to  f_0(x_j) + \nabla f_0(x_j)^T (x - x_j) \le t,   j = 1,\dots,K

(variables x, t)

- yields a lower bound on the optimal value
- can be extended to nondifferentiable convex functions
- a more sophisticated variation: the cutting-plane algorithm (solves a convex optimization problem via a sequence of LP approximations)

Norms

norms on R^n:

- Euclidean norm: \|x\| (or \|x\|_2) = \sqrt{x_1^2 + \cdots + x_n^2}
- l_1-norm: \|x\|_1 = |x_1| + \cdots + |x_n|
- l_\infty- (or Chebyshev-) norm: \|x\|_\infty = max_i |x_i|

[figure: the unit balls \|x\|_2 = 1, \|x\|_\infty = 1, \|x\|_1 = 1]

Norm approximation problems

    minimize    \|Ax - b\|_p

- x \in R^n is the variable; A \in R^{m \times n} and b \in R^m are problem data
- p = 1, 2, \infty
- r = Ax - b is called the residual; r_i = a_i^T x - b_i is the ith residual (a_i^T is the ith row of A)
- usually overdetermined, i.e., b \notin R(A) (e.g., m > n, A full rank)

interpretations:

- approximate or fit b with a linear combination of the columns of A
- b is a corrupted measurement of Ax; find the least inconsistent value of x for the given measurements

examples:

- \|r\| = \sqrt{r^T r}: least-squares or l_2-approximation (a.k.a. regression)
- \|r\|_\infty = max_i |r_i|: Chebyshev, l_\infty, or minimax approximation
- \|r\|_1 = \sum_i |r_i|: absolute-sum or l_1-approximation

solution:

- l_2: closed-form expression x_opt = (A^T A)^{-1} A^T b (assuming rank(A) = n)
- l_1, l_\infty: no closed-form expression, but readily solved via LP

l_1-approximation problem

l_1-approximation via LP:

    minimize    \|Ax - b\|_1

write as

    minimize    \sum_{i=1}^m y_i
    subject to  -y \le Ax - b \le y

an LP with variables y, x:

    minimize    \tilde{c}^T \tilde{x}
    subject to  \tilde{A} \tilde{x} \le \tilde{b}

with

    \tilde{x} = \begin{bmatrix} x \\ y \end{bmatrix},   \tilde{c} = \begin{bmatrix} 0 \\ 1 \end{bmatrix},   \tilde{A} = \begin{bmatrix} A & -I \\ -A & -I \end{bmatrix},   \tilde{b} = \begin{bmatrix} b \\ -b \end{bmatrix}

l_\infty-approximation problem

l_\infty-approximation via LP:

    minimize    \|Ax - b\|_\infty

write as

    minimize    t
    subject to  -t 1 \le Ax - b \le t 1

an LP with variables t, x:

    minimize    \tilde{c}^T \tilde{x}
    subject to  \tilde{A} \tilde{x} \le \tilde{b}

with

    \tilde{x} = \begin{bmatrix} x \\ t \end{bmatrix},   \tilde{c} = \begin{bmatrix} 0 \\ 1 \end{bmatrix},   \tilde{A} = \begin{bmatrix} A & -1 \\ -A & -1 \end{bmatrix},   \tilde{b} = \begin{bmatrix} b \\ -b \end{bmatrix}
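The sketch below (added for illustration; the data are made up) solves all three approximation problems for the same data, matching the comparison on the next two slides:

```python
# l1, l2, and l-infinity approximation of the same made-up data.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
m, n = 100, 5
A = rng.standard_normal((m, n))
b = A @ rng.standard_normal(n) + 0.1 * rng.standard_normal(m)

x2 = np.linalg.lstsq(A, b, rcond=None)[0]   # l2: closed form

# l1: variables (x, y), minimize 1^T y  s.t.  -y <= Ax - b <= y
c1 = np.r_[np.zeros(n), np.ones(m)]
A1 = np.block([[A, -np.eye(m)], [-A, -np.eye(m)]])
x1 = linprog(c1, A_ub=A1, b_ub=np.r_[b, -b],
             bounds=[(None, None)] * (n + m)).x[:n]

# l-infinity: variables (x, t), minimize t  s.t.  -t1 <= Ax - b <= t1
ci = np.r_[np.zeros(n), 1.0]
Ai = np.block([[A, -np.ones((m, 1))], [-A, -np.ones((m, 1))]])
xi = linprog(ci, A_ub=Ai, b_ub=np.r_[b, -b],
             bounds=[(None, None)] * (n + 1)).x[:n]

for name, x in [("l1", x1), ("l2", x2), ("linf", xi)]:
    r = A @ x - b
    print(name, np.abs(r).sum(), np.linalg.norm(r), np.abs(r).max())
```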

Example

    minimize    \|Ax - b\|_p   for p = 1, 2, \infty   (A \in R^{m \times n})

[figure: the resulting residuals r_i versus i, for p = 1, 2, \infty]

[figure: histograms of the residuals r for p = 1, 2, \infty]

- p = \infty gives the thinnest distribution; p = 1 gives the widest distribution
- p = 1 gives the most very small (or even zero) r_i

Interpretation: maximum likelihood estimation

m linear measurements y_1, \dots, y_m of x \in R^n:

    y_i = a_i^T x + v_i,   i = 1,\dots,m

- v_i: measurement noise, IID with density p
- y is a random variable with density

      p_x(y) = \prod_{i=1}^m p(y_i - a_i^T x)

the log-likelihood function is defined as

    log p_x(y) = \sum_{i=1}^m log p(y_i - a_i^T x)

the maximum likelihood (ML) estimate of x is

    \hat{x} = argmax_x \sum_{i=1}^m log p(y_i - a_i^T x)

examples

- v_i Gaussian: p(z) = (1/(\sqrt{2\pi}\sigma)) e^{-z^2/(2\sigma^2)}

  ML estimate is the l_2-estimate \hat{x} = argmin_x \|Ax - y\|_2

- v_i double-sided exponential: p(z) = (1/(2a)) e^{-|z|/a}

  ML estimate is the l_1-estimate \hat{x} = argmin_x \|Ax - y\|_1

- v_i one-sided exponential: p(z) = (1/a) e^{-z/a} for z \ge 0, p(z) = 0 for z < 0

  ML estimate is found by solving the LP

      minimize    1^T (y - Ax)
      subject to  y - Ax \ge 0

- v_i uniform on [-a, a]: p(z) = 1/(2a) for -a \le z \le a, 0 otherwise

  ML estimate is any x satisfying \|Ax - y\|_\infty \le a

Linear-fractional programming

    minimize    (c^T x + d) / (f^T x + g)
    subject to  Ax \le b
                f^T x + g \ge 0

(assume a/0 = +\infty if a > 0 and a/0 = -\infty if a \le 0)

- nonlinear objective function
- like LP, can be solved very efficiently

equivalent form with linear objective (variables x, \gamma):

    minimize    \gamma
    subject to  c^T x + d \le \gamma (f^T x + g)
                f^T x + g \ge 0
                Ax \le b

Bisection algorithm for linear-fractional programming

given: an interval [l, u] that contains the optimal \gamma

repeat:
    solve the feasibility problem for \gamma = (u + l)/2:
        c^T x + d \le \gamma (f^T x + g),   f^T x + g \ge 0,   Ax \le b
    if feasible, u := \gamma; if infeasible, l := \gamma
until u - l \le \epsilon

- each iteration is an LP feasibility problem
- the accuracy doubles at each iteration
- number of iterations to reach accuracy \epsilon, starting with an initial interval of width u - l = \epsilon_0:

      k = \lceil log_2(\epsilon_0/\epsilon) \rceil
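Here is a runnable sketch of the bisection loop (added for illustration; the problem data are made up), using linprog with a zero objective as the LP feasibility test:

```python
# Bisection for a linear-fractional program; the problem data are made up.
# minimize (c^T x + d)/(f^T x + g)  s.t.  Ax <= b, f^T x + g >= 0
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([4.0, 0.0, 0.0])
c, d = np.array([1.0, -1.0]), 0.0
f, g = np.array([1.0, 1.0]), 1.0

def feasible(gamma):
    # feasibility of: (c - gamma f)^T x <= gamma g - d,  -f^T x <= g,  Ax <= b
    A_ub = np.vstack([c - gamma * f, -f, A])
    b_ub = np.r_[gamma * g - d, g, b]
    res = linprog(np.zeros(2), A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * 2)
    return res.status == 0

l, u = -10.0, 10.0
while u - l > 1e-6:
    gamma = (u + l) / 2
    if feasible(gamma):
        u = gamma
    else:
        l = gamma
print("optimal value ~", u)    # about -0.8 for this data
```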

Generalized linear-fractional programming

    minimize    max_{i=1,\dots,K} (c_i^T x + d_i) / (f_i^T x + g_i)
    subject to  Ax \le b
                f_i^T x + g_i \ge 0,   i = 1,\dots,K

equivalent formulation:

    minimize    \gamma
    subject to  Ax \le b
                c_i^T x + d_i \le \gamma (f_i^T x + g_i),   i = 1,\dots,K
                f_i^T x + g_i \ge 0,   i = 1,\dots,K

- efficiently solved via bisection on \gamma
- each iteration is an LP feasibility problem

Von Neumann economic growth problem

simple model of an economy: m goods, n economic sectors

- x_i(t): activity of sector i in current period t
- a_i^T x(t): amount of good i consumed in period t
- b_i^T x(t): amount of good i produced in period t

choose x(t) to maximize the growth rate min_i x_i(t+1)/x_i(t):

    maximize    \gamma
    subject to  A x(t+1) \le B x(t),   x(t+1) \ge \gamma x(t),   x(t) \ge 1

or equivalently (since a_{ij} \ge 0):

    maximize    \gamma
    subject to  \gamma A x(t) \le B x(t),   x(t) \ge 1

(a linear-fractional problem with variables x(0), \gamma)

Optimal transmitter power allocation

- m transmitters, mn receivers, all at the same frequency
- transmitter i wants to transmit to n receivers labeled (i, j), j = 1,\dots,n

[figure: transmitter i, transmitter k, and receiver (i, j)]

- A_{ijk} is the path gain from transmitter k to receiver (i, j)
- N_{ij} is the (self-)noise power of receiver (i, j)
- variables: transmitter powers p_k, k = 1,\dots,m

at receiver (i, j):

- signal power: S_{ij} = A_{iji} p_i
- noise plus interference power: I_{ij} = \sum_{k \ne i} A_{ijk} p_k + N_{ij}
- signal to interference/noise ratio (SINR): S_{ij}/I_{ij}

problem: choose p_i to maximize the smallest SINR:

    maximize    min_{i,j}  A_{iji} p_i / ( \sum_{k \ne i} A_{ijk} p_k + N_{ij} )
    subject to  0 \le p_i \le p_max

- a (generalized) linear-fractional program
- special case with an analytical solution: m = 1, no upper bound on p_i (see exercises)


ESE504 (Fall 2013)

Lecture 5: Applications in control

- optimal input design
- robust optimal input design
- pole placement (with low-authority control)

Linear dynamical system

single input/single output: input u(t) \in R, output y(t) \in R:

    y(t) = h_0 u(t) + h_1 u(t-1) + h_2 u(t-2) + \cdots

- the h_i are called impulse response coefficients
- finite impulse response (FIR) system of order k: h_i = 0 for i > k

if u(t) = 0 for t < 0:

    \begin{bmatrix} y(0) \\ y(1) \\ y(2) \\ \vdots \\ y(N) \end{bmatrix} =
    \begin{bmatrix} h_0 & 0 & 0 & \cdots & 0 \\ h_1 & h_0 & 0 & \cdots & 0 \\ h_2 & h_1 & h_0 & \cdots & 0 \\ \vdots & & & \ddots & \\ h_N & h_{N-1} & h_{N-2} & \cdots & h_0 \end{bmatrix}
    \begin{bmatrix} u(0) \\ u(1) \\ u(2) \\ \vdots \\ u(N) \end{bmatrix}

a linear mapping from input to output sequence
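This lower-triangular Toeplitz mapping is easy to build numerically; a small sketch (added for illustration; the impulse response is made up):

```python
# The input-to-output (Toeplitz) mapping of an FIR system; h is made up.
import numpy as np
from scipy.linalg import toeplitz

N = 10
h = np.array([1.0, 0.5, 0.25])            # hypothetical FIR coefficients
col = np.zeros(N + 1); col[:len(h)] = h   # first column: h_0, h_1, ..., 0
T = toeplitz(col, np.zeros(N + 1))        # lower-triangular Toeplitz matrix

u = np.ones(N + 1)                        # a unit step input
y = T @ u                                 # y(t) = sum_i h_i u(t - i)
print(y)                                  # the step response s(t)
```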

Output tracking problem

choose inputs u(t), t = 0,\dots,M (M < N) that

- minimize the peak deviation between y(t) and a desired output y_des(t), t = 0,\dots,N:

      max_{t=0,\dots,N} |y(t) - y_des(t)|

- satisfy amplitude and slew rate constraints:

      |u(t)| \le U,   |u(t+1) - u(t)| \le S

as a linear program (variables: w, u(0), \dots, u(N)):

    minimize    w
    subject to  -w \le \sum_{i=0}^t h_i u(t-i) - y_des(t) \le w,   t = 0,\dots,N
                u(t) = 0,   t = M+1,\dots,N
                -U \le u(t) \le U,   t = 0,\dots,M
                -S \le u(t+1) - u(t) \le S,   t = 0,\dots,M+1

example: single input/output, N = 200

[figure: step response and desired output y_des]

constraints on u:

- input horizon M = 150
- amplitude constraint |u(t)| \le 1.1
- slew rate constraint |u(t) - u(t-1)| \le 0.25

[figures: output and desired output y(t), y_des(t); optimal input sequence u, with |u(t)| \le 1.1 and |u(t) - u(t-1)| \le 0.25]

Robust output tracking (1)

the impulse response is not exactly known; it can take two values:

    (h_0^{(1)}, h_1^{(1)}, \dots, h_k^{(1)}),   (h_0^{(2)}, h_1^{(2)}, \dots, h_k^{(2)})

design an input sequence that minimizes the worst-case peak tracking error:

    minimize    w
    subject to  -w \le \sum_{i=0}^t h_i^{(1)} u(t-i) - y_des(t) \le w,   t = 0,\dots,N
                -w \le \sum_{i=0}^t h_i^{(2)} u(t-i) - y_des(t) \le w,   t = 0,\dots,N
                u(t) = 0,   t = M+1,\dots,N
                -U \le u(t) \le U,   t = 0,\dots,M
                -S \le u(t+1) - u(t) \le S,   t = 0,\dots,M+1

an LP in the variables w, u(0), \dots, u(N)

example

[figures: the two step responses; outputs and desired output; input u(t); slew rate u(t) - u(t-1) with limit 0.25]

Robust output tracking (2)

the impulse response depends on unknown parameters s_1, \dots, s_K:

    \begin{bmatrix} h_0(s) \\ h_1(s) \\ \vdots \\ h_k(s) \end{bmatrix} =
    \begin{bmatrix} h_0 \\ h_1 \\ \vdots \\ h_k \end{bmatrix}
    + s_1 \begin{bmatrix} v_0^{(1)} \\ v_1^{(1)} \\ \vdots \\ v_k^{(1)} \end{bmatrix}
    + \cdots
    + s_K \begin{bmatrix} v_0^{(K)} \\ v_1^{(K)} \\ \vdots \\ v_k^{(K)} \end{bmatrix}

h_i and v_i^{(j)} are given; s_j \in [-1, +1] is unknown

robust output tracking problem (variables w, u(t)):

    minimize    w
    subject to  -w \le \sum_{i=0}^t h_i(s) u(t-i) - y_des(t) \le w,   t = 0,\dots,N, for all s \in [-1, 1]^K
                u(t) = 0,   t = M+1,\dots,N
                -U \le u(t) \le U,   t = 0,\dots,M
                -S \le u(t+1) - u(t) \le S,   t = 0,\dots,M+1

straightforward (and very inefficient) solution: enumerate all 2^K extreme values of s

simplification: we can express the 2^{K+1} linear inequalities

    -w \le \sum_{i=0}^t h_i(s) u(t-i) - y_des(t) \le w   for all s \in \{-1, 1\}^K

as two nonlinear inequalities

    \sum_{i=0}^t h_i u(t-i) + \sum_{j=1}^K \left| \sum_{i=0}^t v_i^{(j)} u(t-i) \right| \le y_des(t) + w

    \sum_{i=0}^t h_i u(t-i) - \sum_{j=1}^K \left| \sum_{i=0}^t v_i^{(j)} u(t-i) \right| \ge y_des(t) - w

proof:

    max_{s \in \{-1,1\}^K} \sum_{i=0}^t h_i(s) u(t-i)
      = \sum_{i=0}^t h_i u(t-i) + \sum_{j=1}^K max_{s_j \in \{-1,+1\}} s_j \sum_{i=0}^t v_i^{(j)} u(t-i)
      = \sum_{i=0}^t h_i u(t-i) + \sum_{j=1}^K \left| \sum_{i=0}^t v_i^{(j)} u(t-i) \right|

and similarly for the lower bound

the robust output tracking problem reduces to:

    minimize    w
    subject to  \sum_{i=0}^t h_i u(t-i) + \sum_{j=1}^K |\sum_{i=0}^t v_i^{(j)} u(t-i)| \le y_des(t) + w,   t = 0,\dots,N
                \sum_{i=0}^t h_i u(t-i) - \sum_{j=1}^K |\sum_{i=0}^t v_i^{(j)} u(t-i)| \ge y_des(t) - w,   t = 0,\dots,N
                u(t) = 0,   t = M+1,\dots,N
                -U \le u(t) \le U,   t = 0,\dots,M
                -S \le u(t+1) - u(t) \le S,   t = 0,\dots,M+1

(variables u(t), w)

to express this as an LP: for t = 0,\dots,N, j = 1,\dots,K, introduce new variables p^{(j)}(t) and constraints

    -p^{(j)}(t) \le \sum_{i=0}^t v_i^{(j)} u(t-i) \le p^{(j)}(t)

and replace |\sum_i v_i^{(j)} u(t-i)| by p^{(j)}(t)

example (K = 6)

[figures: nominal and perturbed step responses; design for the nominal system: output for the nominal system and output for the worst-case system]

robust design

[figures: output for the nominal system and output for the worst-case system]

State space description

input-output description:

    y(t) = H_0 u(t) + H_1 u(t-1) + H_2 u(t-2) + \cdots

if u(t) = 0, t < 0:

    \begin{bmatrix} y(0) \\ y(1) \\ y(2) \\ \vdots \\ y(N) \end{bmatrix} =
    \begin{bmatrix} H_0 & 0 & 0 & \cdots & 0 \\ H_1 & H_0 & 0 & \cdots & 0 \\ H_2 & H_1 & H_0 & \cdots & 0 \\ \vdots & & & \ddots & \\ H_N & H_{N-1} & H_{N-2} & \cdots & H_0 \end{bmatrix}
    \begin{bmatrix} u(0) \\ u(1) \\ u(2) \\ \vdots \\ u(N) \end{bmatrix}

block Toeplitz structure (constant along the diagonals)

state space model:

    x(t+1) = A x(t) + B u(t),   y(t) = C x(t) + D u(t)

with H_0 = D, H_i = C A^{i-1} B (i > 0)

x(t) \in R^n is the state sequence

alternative description: keep the states x(t) as variables and write the recursion and output equations as one large, sparse system of linear equations,

    x(t+1) - A x(t) - B u(t) = 0,   y(t) = C x(t) + D u(t),   t = 0,\dots,N

in the variables x(0), \dots, x(N), u(0), \dots, u(N)

- we don't eliminate the intermediate variables x(t)
- the matrix is larger, but very sparse (interesting when using general-purpose LP solvers)

Pole placement

linear system

    \dot{z}(t) = A(x) z(t),   z(0) = z_0

where

    A(x) = A_0 + x_1 A_1 + \cdots + x_p A_p \in R^{n \times n}

solutions have the form

    z_i(t) = \sum_k \beta_{ik} e^{\sigma_k t} \cos(\omega_k t - \phi_{ik})

where \lambda_k = \sigma_k \pm j \omega_k are the eigenvalues of A(x)

- x \in R^p is the design parameter
- goal: place the eigenvalues of A(x) in a desired region by choosing x

Low-authority control

- the eigenvalues of A(x) are very complicated (nonlinear, nondifferentiable) functions of x
- first-order perturbation: if \lambda_i(A_0) is simple, then

      \lambda_i(A(x)) = \lambda_i(A_0) + \sum_{k=1}^p \frac{w_i^* A_k v_i}{w_i^* v_i} x_k + o(\|x\|)

  where w_i, v_i are the left and right eigenvectors:

      w_i^* A_0 = \lambda_i(A_0) w_i^*,   A_0 v_i = \lambda_i(A_0) v_i

- low-authority control: use the linear first-order approximations for the \lambda_i
- can place the \lambda_i in a polyhedral region by imposing linear inequalities on x
- we expect this to work only for small shifts in the eigenvalues
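A quick numeric check of the first-order perturbation formula (added for illustration; A_0 and A_1 are made up):

```python
# Numeric check of the first-order eigenvalue perturbation formula;
# A0 and A1 are made-up random matrices.
import numpy as np

rng = np.random.default_rng(2)
n = 4
A0 = rng.standard_normal((n, n))
A1 = rng.standard_normal((n, n))
t = 1e-6                                # small perturbation x_1 = t

lam0, V = np.linalg.eig(A0)
W = np.linalg.inv(V).conj().T           # columns of W are the left eigenvectors
i = 0
w, v = W[:, i], V[:, i]
pred = lam0[i] + t * (w.conj() @ A1 @ v) / (w.conj() @ v)

lam1 = np.linalg.eig(A0 + t * A1)[0]
actual = lam1[np.argmin(abs(lam1 - lam0[i]))]
print(abs(pred - actual))               # O(t^2): very small
```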

Example

truss with 30 nodes, 83 bars:

    M \ddot{d}(t) + D \dot{d}(t) + K d(t) = 0

- d(t): vector of horizontal and vertical node displacements
- M = M^T > 0 (mass matrix): masses at the nodes
- D = D^T > 0 (damping matrix); K = K^T > 0 (stiffness matrix)

to increase damping, we attach dampers to the bars:

    D(x) = D_0 + x_1 D_1 + \cdots + x_p D_p

x_i > 0: amount of external damping at bar i

eigenvalue placement problem:

    minimize    \sum_{i=1}^p x_i
    subject to  \lambda_i(M, D(x), K) \in C,   i = 1,\dots,n
                x \ge 0

an LP if C is polyhedral and we use the first-order approximation for the \lambda_i

[figure: eigenvalues before and after placement]

[figure: location of the dampers on the truss]


More information

Inequality Constraints

Inequality Constraints Chapter 2 Inequality Constraints 2.1 Optimality Conditions Early in multivariate calculus we learn the significance of differentiability in finding minimizers. In this section we begin our study of the

More information

Lecture Summaries for Linear Algebra M51A

Lecture Summaries for Linear Algebra M51A These lecture summaries may also be viewed online by clicking the L icon at the top right of any lecture screen. Lecture Summaries for Linear Algebra M51A refers to the section in the textbook. Lecture

More information

Numerical Linear Algebra

Numerical Linear Algebra Numerical Linear Algebra The two principal problems in linear algebra are: Linear system Given an n n matrix A and an n-vector b, determine x IR n such that A x = b Eigenvalue problem Given an n n matrix

More information

CS295: Convex Optimization. Xiaohui Xie Department of Computer Science University of California, Irvine

CS295: Convex Optimization. Xiaohui Xie Department of Computer Science University of California, Irvine CS295: Convex Optimization Xiaohui Xie Department of Computer Science University of California, Irvine Course information Prerequisites: multivariate calculus and linear algebra Textbook: Convex Optimization

More information

LU Factorization. Marco Chiarandini. DM559 Linear and Integer Programming. Department of Mathematics & Computer Science University of Southern Denmark

LU Factorization. Marco Chiarandini. DM559 Linear and Integer Programming. Department of Mathematics & Computer Science University of Southern Denmark DM559 Linear and Integer Programming LU Factorization Marco Chiarandini Department of Mathematics & Computer Science University of Southern Denmark [Based on slides by Lieven Vandenberghe, UCLA] Outline

More information

Linear programming: theory, algorithms and applications

Linear programming: theory, algorithms and applications Linear programming: theory, algorithms and applications illes@math.bme.hu Department of Differential Equations Budapest 2014/2015 Fall Vector spaces A nonempty set L equipped with addition and multiplication

More information

The proximal mapping

The proximal mapping The proximal mapping http://bicmr.pku.edu.cn/~wenzw/opt-2016-fall.html Acknowledgement: this slides is based on Prof. Lieven Vandenberghes lecture notes Outline 2/37 1 closed function 2 Conjugate function

More information

Lecture: Linear algebra. 4. Solutions of linear equation systems The fundamental theorem of linear algebra

Lecture: Linear algebra. 4. Solutions of linear equation systems The fundamental theorem of linear algebra Lecture: Linear algebra. 1. Subspaces. 2. Orthogonal complement. 3. The four fundamental subspaces 4. Solutions of linear equation systems The fundamental theorem of linear algebra 5. Determining the fundamental

More information

EE 546, Univ of Washington, Spring Proximal mapping. introduction. review of conjugate functions. proximal mapping. Proximal mapping 6 1

EE 546, Univ of Washington, Spring Proximal mapping. introduction. review of conjugate functions. proximal mapping. Proximal mapping 6 1 EE 546, Univ of Washington, Spring 2012 6. Proximal mapping introduction review of conjugate functions proximal mapping Proximal mapping 6 1 Proximal mapping the proximal mapping (prox-operator) of a convex

More information

A Quick Tour of Linear Algebra and Optimization for Machine Learning

A Quick Tour of Linear Algebra and Optimization for Machine Learning A Quick Tour of Linear Algebra and Optimization for Machine Learning Masoud Farivar January 8, 2015 1 / 28 Outline of Part I: Review of Basic Linear Algebra Matrices and Vectors Matrix Multiplication Operators

More information

Basic Elements of Linear Algebra

Basic Elements of Linear Algebra A Basic Review of Linear Algebra Nick West nickwest@stanfordedu September 16, 2010 Part I Basic Elements of Linear Algebra Although the subject of linear algebra is much broader than just vectors and matrices,

More information

Jim Lambers MAT 610 Summer Session Lecture 1 Notes

Jim Lambers MAT 610 Summer Session Lecture 1 Notes Jim Lambers MAT 60 Summer Session 2009-0 Lecture Notes Introduction This course is about numerical linear algebra, which is the study of the approximate solution of fundamental problems from linear algebra

More information

Scientific Computing: Dense Linear Systems

Scientific Computing: Dense Linear Systems Scientific Computing: Dense Linear Systems Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 Course MATH-GA.2043 or CSCI-GA.2112, Spring 2012 February 9th, 2012 A. Donev (Courant Institute)

More information

Convex Optimization: Applications

Convex Optimization: Applications Convex Optimization: Applications Lecturer: Pradeep Ravikumar Co-instructor: Aarti Singh Convex Optimization 1-75/36-75 Based on material from Boyd, Vandenberghe Norm Approximation minimize Ax b (A R m

More information

Lecture: Algorithms for LP, SOCP and SDP

Lecture: Algorithms for LP, SOCP and SDP 1/53 Lecture: Algorithms for LP, SOCP and SDP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2018.html wenzw@pku.edu.cn Acknowledgement:

More information

Polynomiality of Linear Programming

Polynomiality of Linear Programming Chapter 10 Polynomiality of Linear Programming In the previous section, we presented the Simplex Method. This method turns out to be very efficient for solving linear programmes in practice. While it is

More information

Algorithms and Theory of Computation. Lecture 13: Linear Programming (2)

Algorithms and Theory of Computation. Lecture 13: Linear Programming (2) Algorithms and Theory of Computation Lecture 13: Linear Programming (2) Xiaohui Bei MAS 714 September 25, 2018 Nanyang Technological University MAS 714 September 25, 2018 1 / 15 LP Duality Primal problem

More information

Linear Programming. Linear Programming I. Lecture 1. Linear Programming. Linear Programming

Linear Programming. Linear Programming I. Lecture 1. Linear Programming. Linear Programming Linear Programming Linear Programming Lecture Linear programming. Optimize a linear function subject to linear inequalities. (P) max " c j x j n j= n s. t. " a ij x j = b i # i # m j= x j 0 # j # n (P)

More information

Lecture 1: Systems of linear equations and their solutions

Lecture 1: Systems of linear equations and their solutions Lecture 1: Systems of linear equations and their solutions Course overview Topics to be covered this semester: Systems of linear equations and Gaussian elimination: Solving linear equations and applications

More information

Convex optimization problems. Optimization problem in standard form

Convex optimization problems. Optimization problem in standard form Convex optimization problems optimization problem in standard form convex optimization problems linear optimization quadratic optimization geometric programming quasiconvex optimization generalized inequality

More information

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra. DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1

More information