Algorithms & Data Structures 2
1 Algorithms & Data Structures 2 Optimization WS2017 B. Anzengruber-Tanase (Institute for Pervasive Computing, JKU Linz)
2 WHAT IS OPTIMIZATION?
Optimization problems are made up of three basic ingredients:
- An objective function which we want to minimize or maximize. Examples: maximize the profit or minimize the cost in a manufacturing process; in fitting experimental data to a user-defined model, minimize the total deviation of the observed data from the model's predictions; in designing an automobile panel, we might want to maximize the strength.
- A set of unknowns or variables which affect the value of the objective function. Examples: in the manufacturing problem, the variables might include the amounts of different resources used or the time spent on each activity; in the data-fitting problem, the unknowns are the parameters that define the model; in the panel design problem, the variables define the shape and dimensions of the panel.
- A set of constraints that allow the unknowns to take on certain values but exclude others. Examples: for the manufacturing problem, it does not make sense to spend a negative amount of time on any activity, so we constrain all the "time" variables to be non-negative; in the panel design problem, we would probably want to limit the weight of the product and to constrain its shape.
The optimization problem is then: find values of the variables that minimize or maximize the objective function while satisfying the constraints.
Algorithms & Datastructures 2 // 2017W // 2
3 OPTIMIZATION CLASSIFICATION
The word "Programming" is used here in the sense of "planning". Optimization problems divide into:
- Feasibility Problem (no objective function)
- Single Objective Function
  - Continuous
    - Constrained: Linear Programming, Semidefinite Programming, Quadratic Programming, Nonlinearly Constrained, Bound Constrained, Network Programming, Stochastic Programming
    - Unconstrained: Nonlinear Equations, Nonlinear Least Squares, Global Optimization, Nondifferentiable Optimization
  - Discrete: Integer Programming, Stochastic Programming
- Multiobjective Optimization
5 FEASIBILITY PROBLEM
In some cases the goal is to find a set of variables that satisfies the constraints of the model. The user does not particularly want to optimize anything, so there is no reason to define an objective function. This type of problem is usually called a feasibility problem or constraint satisfaction problem.
Examples:
- 8-Queens Problem (n-Queens Problem)
- Graph Colouring
Solution Techniques: From the standpoint of computational complexity, finding out whether a problem has a feasible solution may be essentially as hard as actually finding the optimal solution, because if there exists no feasible solution, then the entire solution space must be explored to prove this. There are no shortcuts in general, unless something useful about the model's structure is known (e.g., when solving some form of a transportation problem, it may be possible to assure feasibility by checking that the capacities of the sources add up to at least the sum of the demands at the destinations).
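The n-queens problem mentioned above can be solved by plain backtracking, since any placement satisfying all constraints is acceptable; a minimal sketch:

```python
def n_queens(n):
    """Collect all placements of n queens on an n x n board such that
    no two queens attack each other (a pure feasibility search)."""
    solutions = []

    def place(row, cols, diag1, diag2, board):
        if row == n:
            solutions.append(tuple(board))
            return
        for col in range(n):
            # A square is feasible if its column and both diagonals are free.
            if col in cols or (row - col) in diag1 or (row + col) in diag2:
                continue
            place(row + 1, cols | {col}, diag1 | {row - col},
                  diag2 | {row + col}, board + [col])

    place(0, set(), set(), set(), [])
    return solutions

print(len(n_queens(8)))  # the classic 8-queens problem has 92 solutions
```

Note that the search may have to explore the whole tree before concluding that no solution exists, exactly as the complexity remark above states.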
7 MULTIPLE OBJECTIVE FUNCTIONS
Optimization of a number of different objectives at once. Usually the different objectives are not compatible, i.e. the variables that optimize one objective may be far from optimal for the others. Example: in the panel design problem, it would be nice to minimize weight and maximize strength simultaneously.
Solution Techniques:
- Goal Programming: problems with multiple objectives are reformulated as single-objective problems, either by forming a weighted combination of the different objectives or by replacing some of the objectives by constraints.
- Pareto preference analysis: essentially a brute-force examination of all possible solutions.
- Priority Ordering: put the objective functions in priority order, optimize on one objective, then change it to a constraint fixed at the optimal value (perhaps subject to a small tolerance), and repeat with the next function.
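The weighted-combination idea behind goal programming can be sketched on a made-up toy problem (the two objectives and the grid below are illustrative, not from the slides): two conflicting objectives, one best at x = 0 and one best at x = 4, are collapsed into a single weighted objective.

```python
# Hypothetical toy problem: one decision variable x on a small grid and
# two conflicting objectives f1 (best at x = 0) and f2 (best at x = 4).
def f1(x):
    return x ** 2

def f2(x):
    return (x - 4) ** 2

def weighted_sum_minimum(w1, xs):
    """Scalarize the two objectives with weights w1 and (1 - w1) and
    minimize the single combined objective over the grid xs."""
    return min(xs, key=lambda x: w1 * f1(x) + (1 - w1) * f2(x))

xs = range(5)
print(weighted_sum_minimum(1.0, xs))  # only f1 counts -> x = 0
print(weighted_sum_minimum(0.0, xs))  # only f2 counts -> x = 4
print(weighted_sum_minimum(0.5, xs))  # equal weights -> compromise x = 2
```

Sweeping the weight from 0 to 1 traces out different trade-offs between the two objectives, which is exactly why the variables optimizing one objective can be far from optimal for the other.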
9 SINGLE OBJECTIVE FUNCTION
Constrained Continuous Optimization:
- all variables are allowed to take values from subintervals of the real numbers
- the values that variables are allowed to take are bounded by constraints
Solution techniques depend on the characteristics of the objective function (linear, quadratic, or arbitrary), the constraints (linear or arbitrary), and the variables (deterministic or stochastic). This leads to the subclasses Linear Programming, Semidefinite Programming, Quadratic Programming, Nonlinearly Constrained, Bound Constrained, Network Programming and Stochastic Programming.
10 LINEAR PROGRAMMING (LP)
The basic problem of linear programming is to minimize a linear objective function of continuous real variables, subject to linear constraints. For purposes of describing and analyzing algorithms, the problem is often stated in the standard form
min{c^T x : Ax = b, x ≥ 0}
or, allowing general bounds on the variables,
min{c^T x : Ax + y = b, L ≤ (x, y) ≤ U}
where x ∈ R^n is the vector of unknowns, c ∈ R^n the cost vector, and A ∈ R^(m×n) the constraint matrix. The feasible region described by the constraints is a polytope, or simplex, and at least one member of the solution set lies at a vertex (corner) of this polytope.
Example:
min 3x1 - 2x2
s.t. x1, x2 ≥ 0
x1 ≤ 19
2x1 + x2 ≤ 30
x1 - x2 ≤ 5
Brute-force solution: compute the value of the objective function for each corner and take the minimum. Geometric solution: draw the line 3x1 - 2x2 = c and reduce c until the line 3x1 - 2x2 = c leaves the feasible region.
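The brute-force corner enumeration can be sketched directly for this example (the constraint set is as reconstructed above from the transcription, so treat the numbers as an assumption): every pair of constraint boundaries is intersected, infeasible intersection points are discarded, and the objective is evaluated at the remaining corners.

```python
from itertools import combinations

# Brute-force LP solution for the slide's example: minimize 3*x1 - 2*x2
# subject to x1 >= 0, x2 >= 0, x1 <= 19, 2*x1 + x2 <= 30, x1 - x2 <= 5.
# Each constraint is stored as (a1, a2, b) meaning a1*x1 + a2*x2 <= b.
constraints = [
    (-1, 0, 0),   # x1 >= 0
    (0, -1, 0),   # x2 >= 0
    (1, 0, 19),   # x1 <= 19
    (2, 1, 30),   # 2*x1 + x2 <= 30
    (1, -1, 5),   # x1 - x2 <= 5
]

def objective(x1, x2):
    return 3 * x1 - 2 * x2

def vertices(cons):
    """Intersect every pair of constraint boundaries and keep the
    intersection points that satisfy all constraints (the corners)."""
    result = []
    for (a1, a2, b), (c1, c2, d) in combinations(cons, 2):
        det = a1 * c2 - a2 * c1
        if abs(det) < 1e-12:
            continue  # parallel boundaries, no unique intersection
        x1 = (b * c2 - a2 * d) / det
        x2 = (a1 * d - b * c1) / det
        if all(p * x1 + q * x2 <= r + 1e-9 for p, q, r in cons):
            result.append((x1, x2))
    return result

best = min(vertices(constraints), key=lambda v: objective(*v))
print(best, objective(*best))  # minimum -60 at corner (0.0, 30.0)
```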
11 LINEAR PROGRAMMING (LP)
Solution Techniques: A is generally not square, hence it is not possible to solve an LP by just inverting A. Usually A has more columns (unknowns) than rows (constraints) (n > m), and Ax = b is therefore quite likely to be under-determined, leaving great latitude in the choice of x with which to minimize c^T x. Brute force is possible, but there exist better techniques:
- The simplex algorithm (so named because of the geometry of the feasible set) underlies the vast majority of available software packages for linear programming. It generates a sequence of feasible iterates by repeatedly moving from one vertex (corner) of the feasible set to an adjacent vertex with a lower value of the objective function. When it is not possible to find an adjoining vertex with a lower value of the objective function, the current vertex must be optimal, and termination occurs. Note that the computational complexity in the worst case is equal to brute force (all corners are visited).
- Barrier or interior-point methods, by contrast, visit points within the interior of the feasible region. These methods derive from techniques for nonlinear programming that were developed and popularized in the 1960s by Fiacco and McCormick, but their application to linear programming dates back only to Karmarkar's innovative analysis in 1984. Most interior-point methods have polynomial complexity.
12 SIMPLEX METHOD
Algebraically speaking, the simplex method is based on the observation that at least n-m of the components of x are zero if x is a vertex of the feasible set. Accordingly, the components of x can be partitioned at each vertex into a set of m basic variables (all nonnegative) and a set of n-m nonbasic variables (all zero). If we gather the basic variables into a subvector x_B ∈ R^m and the nonbasics into another subvector x_N ∈ R^(n-m), we can partition the columns of A as [B N], where B contains the m columns that correspond to x_B. (B is then square.)
At each iteration of the simplex method, a basic variable (a component of x_B) is reclassified as nonbasic, and vice versa, i.e. x_B and x_N exchange a component. (Geometrically, this swapping process corresponds to a move from one vertex of the feasible set to an adjacent vertex.) We therefore need to choose which component of x_N should enter x_B (that is, be allowed to move off its zero bound) and which component of x_B should enter x_N (that is, be driven to zero). In fact, we need make only the first of these choices, since the second is implied by the feasibility constraints Ax = b and x ≥ 0.
13 SIMPLEX METHOD (CONTD.)
In selecting the entering component, we note that c^T x can be expressed as a function of x_N alone. We can express x_B in terms of x_N by noting that Ax = b implies x_B = B^(-1)(b - N x_N). Hence, partitioning c into c_B and c_N in the obvious way, we have
c^T x = c_B^T x_B + c_N^T x_N = c_B^T B^(-1) b + d_N^T x_N
where the vector d_N = c_N - N^T B^(-T) c_B is called the reduced-cost vector.
If all components of x_B are strictly positive and some component (say, the i-th component) of d_N is negative, we can decrease the value of c^T x by allowing component i of x_N to become positive while adjusting x_B to maintain feasibility. Unless there exist feasible points x that make c^T x arbitrarily negative, the requirement x_B ≥ 0 imposes an upper bound on x_N,i, the i-th component of x_N. In principle, we can choose any component x_N,i with d_N,i < 0 as entering variable; if there are no negative entries in d_N, the current point x is optimal. If there is more than one, we would ideally pick the component that leads to the largest reduction in c^T x on the current iteration. The entering variable can increase to
x_N,i = min{(B^(-1) b)_j / (B^(-1) N_i)_j : (B^(-1) N_i)_j > 0}
The index j that achieves the minimum in this formula indicates the basic variable x_j that is to become nonbasic. If more than one component achieves the minimum simultaneously, the one with the largest value of (B^(-1) N_i)_j is usually selected.
14 SIMPLEX METHOD USING A TABLEAU
Simplex Tableau: one row per basic variable x_B,i with its value b_i; one column per non-basic variable x_N,m+j with the coefficients a_ij (i = 1..m, j = 1..(n-m)); a final row holds the reduced costs d_N,m+j and the current value of the objective function c^T x = c.
Simplex Iteration:
- If all d_N ≥ 0, then x_B is the optimal solution.
- Else choose the column c with the minimum (most negative) value of d_N (the pivot column, with x_N,m+c being the variable entering the set of basic variables), and choose the row r with the minimum value of b_i/a_ic, considering only those rows where a_ic > 0 (the pivot row, with x_B,r leaving the basis). Use a_rc as the pivot element. If all elements in the pivot column are negative, the problem is unbounded: stop!
- Compute the new tableau values as follows:
  - Exchange x_B,r and x_N,m+c.
  - New pivot element: a_rc' = 1/a_rc.
  - New pivot row: a_rj' = a_rj/a_rc for j ≠ c, and new basic variable value b_r' = b_r/a_rc.
  - New pivot column: a_ic' = a_ic/a_rc for i ≠ r, and new reduced cost value d_c' = d_c/a_rc.
  - All other coefficients: a_ij' = a_ij - a_rj'·a_ic for i ≠ r, j ≠ c; new basic variable values b_i' = b_i - b_r'·a_ic for i ≠ r; new reduced cost values d_j' = d_j - d_c'·a_rj for j ≠ c.
  - New value of the objective function: c' = c + b_r'·d_c.
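The tableau update rules can be sketched literally in Python; the example data below are the first pivot of the worked example on the following slides (pivot row r = 4 for x8, pivot column c = 1 for x2, pivot element 1), so the expected results can be checked against the hand computation there.

```python
# A sketch of one simplex tableau pivot, implementing the update
# formulas exactly as stated on the slide (primed values are computed
# from the current tableau).
def pivot(A, b, d, c_val, r, cc):
    """Apply one tableau pivot on element A[r][cc]; returns the new
    coefficient matrix, basic variable values, reduced costs and
    objective value."""
    arc = A[r][cc]
    m, k = len(A), len(A[0])
    nA = [row[:] for row in A]
    nb, nd = b[:], d[:]
    nA[r] = [1 / arc if j == cc else A[r][j] / arc for j in range(k)]  # pivot row
    nb[r] = b[r] / arc
    nd[cc] = d[cc] / arc  # new reduced cost in the pivot column
    for i in range(m):
        if i == r:
            continue
        nA[i][cc] = A[i][cc] / arc  # pivot column
        for j in range(k):
            if j != cc:
                nA[i][j] = A[i][j] - nA[r][j] * A[i][cc]  # all other entries
        nb[i] = b[i] - nb[r] * A[i][cc]
    for j in range(k):
        if j != cc:
            nd[j] = d[j] - nd[cc] * A[r][j]
    return nA, nb, nd, c_val + nb[r] * d[cc]  # new objective value

# First pivot of the example on the next slides: exchange x2 and x8.
A = [[2, -1, 3], [0, 0, 3], [2, 0, 0], [0, 1, 0], [-2, 1, -3]]
b, d = [1, 2, 6, 6, 2], [4, -6, 9]
nA, nb, nd, nc = pivot(A, b, d, 0, r=4, cc=1)
print(nb, nd, nc)  # [3.0, 2.0, 6.0, 4.0, 2.0] [-8.0, -6.0, -9.0] -12.0
```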
15 SIMPLEX METHOD EXAMPLE
Consider the following LP:
minimize 4x1 - 6x2 + 9x3
s.t. x_i ≥ 0 for i = 1..8
2x1 - x2 + 3x3 + x4 = 1
3x3 + x5 = 2
2x1 + x6 = 6
x2 + x7 = 6
-2x1 + x2 - 3x3 + x8 = 2
Note: the problem is given in so-called canonical form, which can be translated directly into a tableau: basic variables x4..x8 with values b = (1, 2, 6, 6, 2), non-basic variables x1, x2, x3 with coefficients a_ij and reduced costs d = (4, -6, 9). If the problem is given in another form, transformations are necessary (including, for example, the introduction of additional variables).
Selecting the pivot: the pivot column is defined by the minimum negative reduced cost value, here d_2 = -6, so x2 enters the basis. The pivot row is chosen by min b_i/a_i2 over all i where a_i2 > 0, i.e. min{6/1; 2/1}, i.e. x8 leaves the basis; the pivot element is a_52 = 1.
16 SIMPLEX METHOD EXAMPLE (CONTD.)
First tableau update (pivot element a_52 = 1, exchanging x2 and x8):
- Compute the new value of the pivot element: a_rc' = 1/a_rc = 1/1 = 1.
- Compute the new pivot row with a_rj' = a_rj/a_rc for j ≠ c and b_r' = b_r/a_rc (values not changed, because the pivot element was 1).
(The full tableau values are not recoverable from this transcription.)
17 SIMPLEX METHOD EXAMPLE (CONTD.)
- Compute the new pivot column with a_ic' = a_ic/a_rc for i ≠ r and d_c' = d_c/a_rc.
- Compute all other new coefficient values a_ij' = a_ij - a_rj·a_ic for i ≠ r, j ≠ c:
a_11' = a_11 - a_51·a_12 = 2 - (-2)(-1) = 0
a_21' = a_21 - a_51·a_22 = 0 - (-2)·0 = 0
a_31' = a_31 - a_51·a_32 = 2 - (-2)·0 = 2
a_41' = a_41 - a_51·a_42 = 0 - (-2)·1 = 2
a_13' = a_13 - a_53·a_12 = 3 - (-3)(-1) = 0
a_23' = a_23 - a_53·a_22 = 3 - (-3)·0 = 3
a_33' = a_33 - a_53·a_32 = 0 - (-3)·0 = 0
a_43' = a_43 - a_53·a_42 = 0 - (-3)·1 = 3
- Compute the new basic variable values b_i' = b_i - b_r·a_ic for i ≠ r:
b_1' = b_1 - b_5·a_12 = 1 - 2·(-1) = 3
b_2' = b_2 - b_5·a_22 = 2 - 2·0 = 2
b_3' = b_3 - b_5·a_32 = 6 - 2·0 = 6
b_4' = b_4 - b_5·a_42 = 6 - 2·1 = 4
- Compute the new reduced cost values d_j' = d_j - d_c·a_rj for j ≠ c:
d_1' = d_1 - d_c·a_51 = 4 - (-6)(-2) = -8
d_3' = d_3 - d_c·a_53 = 9 - (-6)(-3) = -9
- Compute the new value of the objective function: c' = c + b_r·d_c = 0 + 2·(-6) = -12
18 SIMPLEX METHOD EXAMPLE (CONTD.)
The remaining iterations proceed in the same way (the intermediate tableaus are not recoverable from this transcription) until all reduced costs are non-negative.
Optimum solution obtained:
x1 = 2, x2 = 6, x4 = 3, x5 = 2, x6 = 2, x3 = x7 = x8 = 0
19 SEMIDEFINITE PROGRAMMING (SDP)
The (linear) semidefinite programming problem (SDP) is essentially an ordinary linear program where the nonnegativity constraint is replaced by a semidefinite constraint on matrix variables. The standard form of the problem is
min{C • X : A_k • X = b_k (k = 1..m); X ⪰ 0}
where C, the A_k and X are all symmetric n×n matrices, b_k is a scalar, and the constraint X ⪰ 0 means that X, the unknown matrix, must lie in the closed, convex cone of positive semidefinite matrices. Here "•" denotes the standard inner product on the space of symmetric matrices.
Solution Technique: interior-point algorithms
20 NONLINEARLY CONSTRAINED PROGRAMMING (NLP)
The general constrained optimization problem is to minimize a nonlinear function subject to nonlinear constraints. Two equivalent formulations of this problem are useful for describing algorithms:
min{f(x) : c_i(x) ≥ 0, i ∈ I; c_i(x) = 0, i ∈ E}    (1)
where each c_i is a mapping from R^n to R, and I and E are index sets for inequality and equality constraints, respectively; and
min{f(x) : c(x) = 0, l ≤ x ≤ u}    (2)
where c maps from R^n to R^m, and the lower- and upper-bound vectors l and u may contain some infinite components.
Solution Techniques: The main techniques that have been proposed for solving constrained optimization problems are reduced-gradient methods, sequential linear and quadratic programming methods, and methods based on augmented Lagrangians and exact penalty functions.
21 BOUND CONSTRAINED OPTIMIZATION
Bound-constrained optimization problems play an important role in the development of software for the general constrained problem, because many constrained codes reduce the solution of the general problem to the solution of a sequence of bound-constrained problems. The problem is formulated as follows:
min{f(x) : l ≤ x ≤ u}
where the lower- and upper-bound vectors l and u may contain some infinite components.
Solution Techniques: Newton methods, gradient-projection methods
22 QUADRATIC PROGRAMMING (QP)
The quadratic programming problem involves the minimization of a quadratic function subject to linear constraints. The following formulation is commonly used:
min{½ x^T Q x + c^T x : c_i^T x ≤ b_i, i ∈ I; c_i^T x = b_i, i ∈ E}
where Q ∈ R^(n×n) is symmetric, and the index sets I and E specify the inequality and equality constraints, respectively.
Solution Techniques: The difficulty of solving the quadratic programming problem depends largely on the nature of the matrix Q. In convex quadratic programs, which are relatively easy to solve, the matrix Q is positive semidefinite. If Q has negative eigenvalues (nonconvex quadratic programming), then the objective function may have more than one local minimizer.
23 NETWORK PROGRAMMING
Network problems arise in applications that can be represented as the flow of a commodity in a network. The resulting programs can be linear or non-linear. Due to the network structure of the model, fast techniques for solving these problems can be developed.
Examples:
- Transportation Problem: we have a commodity that can be produced in different locations and needs to be shipped to different distribution centers. Given the cost of shipping a unit of the commodity between each two points, the capacity of each production center, and the demand at each distribution center, find the minimal-cost shipping plan.
- Assignment Problem (a special case of the transportation problem): there are n individuals that need to be assigned to n different tasks. Each individual is assigned to one job only and each job is performed by one person. Given the cost that each individual charges for performing each of the n jobs, find a minimal-cost assignment.
- Maximum Value Flow: given a directed network of roads that connects two cities and the capacities of these roads, find the maximum number of units (cars) that can be routed from one city to another. Here the constraints are the equilibrium or balance equations at each node (or road intersection); i.e., the flow of cars into a node is equal to the flow of cars out of that node.
- Shortest Path Problem: given a directed network and the length of each arc in this network, find a shortest path between two given nodes.
- Minimum Cost Flow Problem: given a directed network with upper and lower capacities on each of its arcs, and given a set of external flows (positive or negative) that need to be routed through this network, find the minimal-cost routing of the given flows through this network. Here, the cost per unit of flow on each arc is assumed to be known.
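As an illustration of how the network structure enables fast special-purpose algorithms, here is a minimal Dijkstra sketch for the shortest path problem (the example road network is made up for illustration):

```python
import heapq

def dijkstra(graph, source):
    """Return the length of a shortest path from source to every
    reachable node; graph maps node -> list of (neighbour, arc length)."""
    dist = {source: 0}
    queue = [(0, source)]
    while queue:
        d, u = heapq.heappop(queue)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry, a shorter path was found already
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(queue, (d + w, v))
    return dist

roads = {
    "A": [("B", 4), ("C", 2)],
    "B": [("D", 5)],
    "C": [("B", 1), ("D", 8)],
    "D": [],
}
print(dijkstra(roads, "A")["D"])  # shortest A -> D has length 8 (A-C-B-D)
```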
24 STOCHASTIC PROGRAMMING
All of the model formulations encountered thus far in the optimization tree have assumed that the data for the given problem are known accurately. However, for many actual problems the problem data cannot be known accurately, for a variety of reasons: e.g. due to simple measurement error, or because some data represent information about the future (e.g., product demand or price for a future time period) and simply cannot be known with certainty.
Problem Formulation: the problem is written in the form of a mathematical program extended to a parameter space whose values are random variables (generally with known distribution function).
Solution Techniques (forming a certainty equivalent):
- Average value: replace all random variables with their means.
- Chance constraint: given a stochastic program with a random variable p in its constraint g(x; p) ≤ 0, a certainty equivalent is to replace this with the constraint P[g(x; p) ≤ 0] ≥ a, where P[] is a (known) probability function on the range of g, and a is some acceptance level (a = 1 means the constraint must hold for all values of p, except on a set of measure zero).
- Recourse model (see integer programming).
26 SINGLE OBJECTIVE FUNCTION
Unconstrained Continuous Optimization: in the unconstrained optimization problem, we seek a local minimizer of a real-valued function f(x), where x is a vector of real variables. In other words, we seek a vector x* such that f(x*) ≤ f(x) for all x close to x*.
Note: it has been argued that almost all problems really do have constraints. For example, any variable denoting the "number of objects" in a system can only be useful if it is less than the number of elementary particles in the known universe! In practice, though, answers that make good sense in terms of the underlying physical or economic problem can often be obtained without putting constraints on the variables.
27 NONLINEAR LEAST SQUARES PROBLEM
Least squares problems often arise in data-fitting applications. The nonlinear least squares problem has the general form
min{r(x) : x ∈ R^n}
where r is the function defined by r(x) = ‖f(x)‖² for some vector-valued function f that maps R^n to R^m.
Solution Techniques: Gauss-Newton method, Levenberg-Marquardt method
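A one-parameter Gauss-Newton sketch makes the idea concrete; the model, the synthetic data and the starting point below are illustrative assumptions, not from the slides:

```python
import math

# Made-up data-fitting problem: fit the model y = exp(a*t) to data
# generated with a = 0.5.
ts = [0.0, 1.0, 2.0]
ys = [math.exp(0.5 * t) for t in ts]

def gauss_newton(a, iterations=25):
    """Minimize r(a) = ||f(a)||^2 with f_i(a) = exp(a*t_i) - y_i by
    repeatedly taking the Gauss-Newton step a <- a - (J^T J)^(-1) J^T f."""
    for _ in range(iterations):
        f = [math.exp(a * t) - y for t, y in zip(ts, ys)]  # residuals
        J = [t * math.exp(a * t) for t in ts]              # Jacobian (m x 1)
        a -= sum(j * fi for j, fi in zip(J, f)) / sum(j * j for j in J)
    return a

print(round(gauss_newton(0.0), 6))  # converges to the generating a = 0.5
```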
28 NONLINEAR EQUATIONS
Systems of nonlinear equations arise as constraints in optimization problems, but also arise, for example, when differential and integral equations are discretized. In solving a system of nonlinear equations, we seek a vector x such that f(x) = 0, where x is an n-dimensional vector of n variables. Most solution algorithms are closely related to algorithms for unconstrained optimization and nonlinear least squares. Indeed, algorithms for systems of nonlinear equations usually proceed by seeking a local minimizer of the problem
min{‖f(x)‖ : x ∈ R^n}
for some norm, usually the 2-norm. This strategy is reasonable, since any solution of the nonlinear equations is a global solution of the minimization problem.
Solution Technique: Newton's method
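Newton's method for the scalar case can be sketched on the classic equation f(x) = x² - 2 = 0 (the example is illustrative):

```python
# Newton's method: iterate x <- x - f(x)/f'(x) until |f(x)| is small.
def newton(f, fprime, x, tol=1e-12, max_iter=50):
    """Return an approximate root of f starting from x."""
    for _ in range(max_iter):
        if abs(f(x)) < tol:
            break
        x -= f(x) / fprime(x)
    return x

root = newton(lambda x: x * x - 2, lambda x: 2 * x, x=1.0)
print(root)  # ~1.414213562..., i.e. sqrt(2)
```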
29 GLOBAL OPTIMIZATION
Global optimization algorithms try to find an x* that minimizes f(x) over all possible vectors x.
Solution Techniques: any (purposeful) search procedure to seek a solution to a global optimization problem.
Blind search methods include:
- Breadth-first: expanding a node of least depth
- Depth-first: expanding a node of greatest depth (requiring backtracking)
Search strategies guided by a heuristic function:
- Branch-and-Bound
Heuristic search strategies are often based on some biological metaphor:
- Ant colony optimization, based on how ants solve problems
- Genetic algorithms, based on genetics and evolution
- Neural networks, based on how the brain functions
- Simulated annealing, based on thermodynamics
- Tabu search, based on memory-response
- Target analysis, based on learning
30 NONDIFFERENTIABLE OPTIMIZATION
Problems that require the optimization of a nondifferentiable function often occur in the efficient solution of large linear programming problems and in the context of Lagrangian duality.
Solution Techniques: interior point method, cutting-plane method
32 SINGLE OBJECTIVE FUNCTION
Discrete Optimization: some ("mixed integer programming") or all ("integer programming") of the variables must take integer values.
Example: maximum flow in networks (with discrete edge capacities)
Special case: 0/1 integer programming problems, e.g.:
- Knapsack Problem (variables are either 0 (item is not taken) or 1 (item is taken))
- Travelling Salesman Problem
Stochastic Programming: recourse problems are staged problems wherein one alternates decisions with realizations of stochastic data. The objective is to minimize the total expected cost of all decisions.
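The 0/1 knapsack problem mentioned above has a well-known dynamic-programming solution; a minimal sketch (the weights and values are illustrative):

```python
# 0/1 knapsack: each x_i in {0, 1} decides whether item i is taken;
# dynamic programming over the remaining capacity finds the optimum.
def knapsack(weights, values, capacity):
    """Return the maximum total value of a subset of items whose total
    weight does not exceed capacity."""
    best = [0] * (capacity + 1)
    for w, v in zip(weights, values):
        # Traverse capacities downwards so each item is used at most once.
        for cap in range(capacity, w - 1, -1):
            best[cap] = max(best[cap], best[cap - w] + v)
    return best[capacity]

print(knapsack([2, 3, 4, 5], [3, 4, 5, 6], 5))  # items 0 and 1 -> value 7
```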
33 INTEGER LINEAR PROGRAMMING
In many applications, the solution of an optimization problem makes sense only if certain of the unknowns are integers. Integer linear programming problems have the general form
min{c^T x : Ax = b, x ≥ 0, x ∈ Z^n}
where Z^n is the set of n-dimensional integer vectors.
Solution Techniques: It may not be obvious that integer programming is a very much harder problem than ordinary linear programming, but that is nonetheless the case, in both theory and practice.
Branch and Bound:
- Compute a (generally non-integer) solution x for the relaxed problem.
- Build an enumeration tree by branching on a fractional component x_i: the left child is the subproblem with x_i ≤ ⌊x_i⌋, the right child the subproblem with x_i ≥ ⌈x_i⌉.
- The traversal (DFS/BFS) and branching order depend on the particular B&B strategy.
- The depth of the tree corresponds to the dimension of x (the number of integer variables); in a leaf node all variables must have integer values.
- Branches can be pruned by maintaining lower bounds on the value of the objective function, so the whole tree need not be expanded (at least in the average case! In the worst case, e.g. if a weak bounding function is used, all leaves must be visited and the strategy is equal to brute force).
34 Simulated Annealing
35 CONTINUOUS (GLOBAL) OPTIMIZATION
Given a continuous cost function F, find the minimum/maximum value of F across a defined domain of variables (the solution space).
Gradient method liability: it can get stuck in a local optimum.
36 ANNEALING
To avoid the gradient method's liability, use controlled counter-stepping: apply probabilistic control for counter-steps, i.e. adopt the thermodynamics of the annealing process, modeled using the Boltzmann probability distribution:
Prob(E) ~ exp(-E/kT)
E ... possible energy state
T ... system temperature
k ... Boltzmann's constant
37 THE BOLTZMANN DISTRIBUTION (figure)
38 METROPOLIS ALGORITHM [METROPOLIS 53]
Originally: computer simulation of thermodynamic processes. The energy corresponds to the objective function.
1. An initial configuration C_i is chosen at random or by some other, usually heuristic, method. This configuration has an associated cost E_i = F(C_i).
2. An initial temperature T is determined.
3. A random change in the configuration is generated. This change produces a possible new configuration C_i+1.
4. If E_i+1 ≤ E_i, the change is always accepted.
5. If E_i+1 > E_i, the change is accepted with probability exp[-(E_i+1 - E_i)/(kT)] (the probabilistic step).
6. Update the temperature T if required (cooling schedule).
7. Repeat from 3. until T < T_stop (stopping criterion).
39 SIMULATED ANNEALING
- reasonable probability for up-hill stepping at high temperature
- moderate probability for up-hill stepping at low temperature
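The temperature dependence can be checked numerically; a small sketch with Boltzmann's k folded into the temperature scale, using an illustrative step size of dE = 1:

```python
import math

def accept_probability(delta_e, temperature):
    """Probability of accepting an up-hill step of size delta_e."""
    return math.exp(-delta_e / temperature)

p_high = accept_probability(1.0, 10.0)
p_low = accept_probability(1.0, 0.1)
print(p_high)  # ~0.905: up-hill steps are likely at high temperature
print(p_low)   # ~4.5e-05: up-hill steps are rare at low temperature
```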
40 SIMULATED ANNEALING ALGORITHM
Anneal(I, S)
PRE:  I = instance of the problem
      S = initial feasible solution to I of size n > 0
      h(S) = objective function, cost of solution S
POST: S = feasible solution for I
Determine initial temperature T
repeat
    tries := moves := goodmoves := 0
    repeat
        NewS := random neighbour of S
        ΔS := h(NewS) - h(S)
        tries := tries + 1
        rnd := random number between 0 and 1
        if ΔS < 0 or rnd < e^(-ΔS/T) then
            S := NewS
            moves := moves + 1
            if ΔS < 0 then goodmoves := goodmoves + 1
    until goodmoves > n or moves > 2*n or tries > 20*n
    T := 0.85 * T
until moves < 0.05 * tries or T < ε
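The pseudocode above translates almost line by line into Python; the sketch below applies it to a made-up toy problem (the cost function, the neighbour move, the start solution and the stopping temperature are all illustrative assumptions):

```python
import math
import random

# Minimize h(x) = (x - 3)^2 over the integers, with x +/- 1 as the
# random neighbour move. Constants (0.85, 0.05, 2n, 20n) follow the
# pseudocode above.
def anneal(h, neighbour, s0, t0, n=10, seed=42):
    rng = random.Random(seed)  # fixed seed for a reproducible run
    s, best, t = s0, s0, t0
    while True:
        tries = moves = goodmoves = 0
        while goodmoves <= n and moves <= 2 * n and tries <= 20 * n:
            new_s = neighbour(s, rng)
            delta = h(new_s) - h(s)
            tries += 1
            if delta < 0 or rng.random() < math.exp(-delta / t):
                s = new_s
                moves += 1
                if delta < 0:
                    goodmoves += 1
                if h(s) < h(best):
                    best = s  # remember the best solution seen so far
        t *= 0.85  # cooling schedule
        if moves < 0.05 * tries or t < 1e-6:
            return best

h = lambda x: (x - 3) ** 2
step = lambda x, rng: x + rng.choice((-1, 1))
best = anneal(h, step, s0=50, t0=100.0)
print(best, h(best))
```

Down-hill moves are always accepted while up-hill moves become rarer as T cools, so the walk drifts toward the minimum at x = 3 without getting permanently stuck.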
41 SETTING AN INITIAL TEMPERATURE
Temperature(T)
POST: T provides about a 95% chance of accepting a solution change
select an initial solution S0
repeat n times
    generate a neighbouring solution S
    compute ΔS := h(S0) - h(S)
    if ΔS > 0 then sum := sum + ΔS
set ave := sum/n
find a T such that 0.95 < e^(-ave/T)
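The final step has a closed-form solution: 0.95 < e^(-ave/T) is equivalent to T > ave / (-ln 0.95), i.e. roughly 19.5 times the average cost difference. A small sketch with an illustrative ave:

```python
import math

def initial_temperature(ave, acceptance=0.95):
    """Smallest T with exp(-ave/T) equal to the target acceptance;
    any larger T exceeds it."""
    return ave / -math.log(acceptance)

T = initial_temperature(ave=4.0)
print(T)  # ~78.0; exp(-4.0/T) is then exactly 0.95
```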
42 BRANCH AND BOUND
Born in integer programming (IP), B&B is more generally used in global optimization, including continuous functions.
Algorithm Sketch:
1. Initialize the list with the feasible region (X plus all constraints), say S, with associated bounds -inf, +inf (finite bounds on f can be computed if computationally preferable, and the initial set could be a superset of the feasible region). Initialize f* to the lower bound (-inf, unless an initial feasible solution is known).
2. Branch by selecting the k-th set from the list, say S_k, and splitting it into a partition, say S_k = A ∪ B, such that the interior of A does not intersect the interior of B. (S_k could be partitioned into more than 2 subsets.)
3. Bound f in A and/or in B, say L(S) ≤ f(x) ≤ U(S) for all x in S, for S = A and/or B. Define U(S) = -inf if it is determined that S is empty (in which case S will be discarded in the next step).
4. Test whether U(S) ≤ f* + t, for S = A and/or B. If not, put S on the list. (If the test passes, S is discarded by not being put on the list. Note: f* is the best solution value to date, and t is a tolerance that says we want a solution whose objective value is within t of the optimal value.) Further, if L(S) > f* and the associated point x(S) is feasible (where f(x(S)) = L(S)), replace f* with L(S) and x* with x(S). If the list is not empty, go back to the branch step.
B&B terminates when the list becomes empty, and the best feasible solution obtained is x* with value f*, which is within t of optimality. (If f* = -inf, there is no feasible solution.)
43 BRANCH AND BOUND (CONTD.)
Notes:
- A particular bounding rule for IP is to solve the LP relaxation, whose value is an upper bound on the integer-valued maximum. The lower bounds do not change unless an integer solution is obtained from the LP, in which case the best solution to date can change.
- Branching rules can range from breadth-first (expand all possible branches from a node in the search tree before going deeper into the tree) to depth-first (expand the deepest node first, backtracking as necessary). A best-first search selects the node (i.e., the set on the list) that seems most promising, such as one with the greatest upper bound. Typically there are multiple criteria for branching, and these change depending on whether a feasible solution has been obtained.
- Backtracking need not go in exactly the reverse order (i.e., need not follow a LIFO rule), but care must be taken when reordering the path.
- When the problem is not finite (not IP), branching rules need to consider the hyper-volume of each set in a partition to ensure convergence.
BRANCH AND BOUND EXAMPLE

Consider the following (M)ILP:

maximize  x1 + 2*x2
s.t.      6*x1 + 5*x2 <= 27   (a)
          2*x1 + 6*x2 <= 21   (b)
          x1, x2 >= 0, x1, x2 integer

Optimum of the LP relaxation (S): x1 = 2.2, x2 = 2.8, x1 + 2*x2 = 7.73.

[Figure: the feasible region in the (x1, x2) plane bounded by constraints (a) and (b), with the relaxation optimum marked.]
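The quoted relaxation optimum can be verified in code: an LP optimum is attained at a vertex of the feasible polygon, so for two variables it suffices to enumerate pairwise constraint intersections (a sketch, not a general LP solver). The exact vertex is (57/26, 36/13) ≈ (2.19, 2.77), which the slide rounds to (2.2, 2.8), with objective 201/26 ≈ 7.73:

```python
from itertools import combinations

# Constraints of the LP relaxation as a1*x1 + a2*x2 <= b
# (the sign-flipped rows encode x1 >= 0 and x2 >= 0).
cons = [(6, 5, 27), (2, 6, 21), (-1, 0, 0), (0, -1, 0)]

def vertices(cons):
    """Intersect every pair of constraint lines; keep the feasible points."""
    for (a1, a2, b), (c1, c2, d) in combinations(cons, 2):
        det = a1 * c2 - a2 * c1
        if det == 0:
            continue                      # parallel lines, no vertex
        x1 = (b * c2 - a2 * d) / det      # Cramer's rule
        x2 = (a1 * d - b * c1) / det
        if all(p * x1 + q * x2 <= r + 1e-9 for p, q, r in cons):
            yield x1, x2

best = max(vertices(cons), key=lambda v: v[0] + 2 * v[1])
```

Vertex enumeration is exponential in the number of constraints, so this only works as a check for tiny instances; real B&B codes use the simplex method here.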
BRANCH AND BOUND EXAMPLE

Splitting into partitions: x1 = 2.2 is fractional in the relaxation optimum, so S is split along x1 into S1 (x1 <= 2) and S2 (x1 >= 3).

S (root): x1 = 2.2, x2 = 2.8, x1 + 2*x2 = 7.73
S1 (x1 <= 2): x1 = 2, x2 = 2.83, x1 + 2*x2 = 7.67
S2 (x1 >= 3): x1 = 3, x2 = 1.8, x1 + 2*x2 = 6.6

[Figure: the feasible region split into S1 and S2 along x1; search tree with root S and children S1, S2.]
BRANCH AND BOUND EXAMPLE

S1 has the better bound, and its solution has fractional x2 = 2.83, so S1 is split along x2 into S3 (x2 <= 2) and S4 (x2 >= 3).

S3 (x1 <= 2, x2 <= 2): x1 = 2, x2 = 2, x1 + 2*x2 = 6 (integer feasible)
S4 (x1 <= 2, x2 >= 3): x1 = 1.5, x2 = 3, x1 + 2*x2 = 7.5

[Figure: regions S3 and S4 within S1; updated search tree.]
BRANCH AND BOUND EXAMPLE

S4 has fractional x1 = 1.5, so S4 is split along x1 into S5 (x1 <= 1) and S6 (x1 >= 2).

S5 (x1 <= 1, x2 >= 3): x1 = 1, x2 = 3.17, x1 + 2*x2 = 7.33
S6 (x1 >= 2, x2 >= 3): infeasible, since 2*2 + 6*3 = 22 > 21 violates constraint (b)

[Figure: regions S5 and S6; updated search tree.]
BRANCH AND BOUND EXAMPLE

Finally, S5 has fractional x2 = 3.17, so S5 is split along x2 into S7 (x2 <= 3) and S8 (x2 >= 4).

S7 (x1 <= 1, x2 <= 3): x1 = 1, x2 = 3, x1 + 2*x2 = 7 (integer feasible)
S8 (x2 >= 4): infeasible, since 2*x1 + 6*4 > 21 for any x1 >= 0

Optimal solution S7: x1 = 1, x2 = 3, x1 + 2*x2 = 7.

Complete search tree:
S: 7.73 at (2.2, 2.8)
  S1 (x1 <= 2): 7.67 at (2, 2.83)
    S3 (x2 <= 2): 6 at (2, 2)
    S4 (x2 >= 3): 7.5 at (1.5, 3)
      S5 (x1 <= 1): 7.33 at (1, 3.17)
        S7 (x2 <= 3): 7 at (1, 3) (integer, optimal)
        S8 (x2 >= 4): infeasible
      S6 (x1 >= 2): infeasible
  S2 (x1 >= 3): 6.6 at (3, 1.8)

Since the bound of S2 (6.6) is below the incumbent value 7, S2 is discarded without further branching, and the search terminates.
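The whole search above can be reproduced programmatically: bound each subproblem by its LP relaxation (solved here by vertex enumeration, which suffices for two variables), branch on a fractional variable with x_j <= floor / x_j >= ceil cuts, and keep the best integer point found. A self-contained sketch:

```python
from itertools import combinations
import math

# The example MILP: maximize x1 + 2*x2
# s.t. 6*x1 + 5*x2 <= 27, 2*x1 + 6*x2 <= 21, x1, x2 >= 0 and integer.
BASE = [(6, 5, 27), (2, 6, 21), (-1, 0, 0), (0, -1, 0)]  # a1*x1 + a2*x2 <= b

def obj(x):
    return x[0] + 2 * x[1]

def lp_max(cons):
    """LP relaxation by vertex enumeration (works only for 2 variables)."""
    best = None
    for (a1, a2, b), (c1, c2, d) in combinations(cons, 2):
        det = a1 * c2 - a2 * c1
        if det == 0:
            continue                                  # parallel constraint lines
        x = ((b * c2 - a2 * d) / det, (a1 * d - b * c1) / det)
        if all(p * x[0] + q * x[1] <= r + 1e-9 for p, q, r in cons):
            if best is None or obj(x) > obj(best):
                best = x
    return best                                       # None: subproblem infeasible

def branch_and_bound():
    best_val, best_x = float('-inf'), None
    todo = [BASE]                                     # depth-first list of subproblems
    while todo:
        cons = todo.pop()
        sol = lp_max(cons)
        if sol is None or obj(sol) <= best_val:       # empty or bounded out
            continue
        frac = [j for j in range(2) if abs(sol[j] - round(sol[j])) > 1e-6]
        if not frac:                                  # integer point: new incumbent
            best_val, best_x = obj(sol), (round(sol[0]), round(sol[1]))
            continue
        j = frac[0]                                   # branch: x_j <= floor, x_j >= ceil
        a = (1, 0) if j == 0 else (0, 1)
        todo.append(cons + [(a[0], a[1], math.floor(sol[j]))])
        todo.append(cons + [(-a[0], -a[1], -math.ceil(sol[j]))])
    return best_val, best_x
```

The search visits the subproblems in a different order than the slides (depth-first rather than best-first) but arrives at the same optimum, x1 = 1, x2 = 3 with value 7.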
STOCHASTIC PROGRAMMING (DISCRETE)
Recourse

The fundamental idea behind stochastic linear programming is the concept of recourse: the ability to take corrective action after a random event has taken place. A simple example of two-stage recourse is the following:

1. Choose some variables, x, to control what happens today.
2. Overnight, a random event happens.
3. Tomorrow, take some recourse action, y, to correct what may have gotten messed up by the random event.

We can formulate optimization problems to choose x and y in an optimal way. In this example there are two periods: the data for the first period are known with certainty, while some data for the future periods are stochastic, that is, random. Stochastic programs seek to minimize the cost of the first-period decision plus the expected cost of the second-period recourse decision. The concept can be extended to recourse models with more than two stages.
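A minimal numerical sketch of two-stage recourse, with purely illustrative data (not from the slides): choose an order quantity x today, demand d is revealed overnight, and the recourse action tomorrow is to cover any shortfall at a higher unit cost. With only finitely many scenarios, the expected recourse cost is a plain sum, and the first-stage decision can be found by enumeration:

```python
# Hypothetical two-stage instance (illustrative numbers only).
# Stage 1 (today): order x units at cost 1.0 each.
# Overnight: demand d is revealed according to a discrete distribution.
# Stage 2 (tomorrow): recourse y = shortfall, covered at cost 3.0 per unit.
scenarios = [(0.3, 50), (0.5, 80), (0.2, 120)]   # (probability, demand)
ORDER_COST, SHORTFALL_COST = 1.0, 3.0

def expected_total(x):
    """First-stage cost plus expected cost of the optimal recourse action."""
    recourse = sum(p * SHORTFALL_COST * max(d - x, 0) for p, d in scenarios)
    return ORDER_COST * x + recourse

# Choose the first-stage decision x by enumeration over candidate quantities.
best_x = min(range(201), key=expected_total)
```

With these numbers the optimum is x = 80: ordering one more unit costs 1.0 but saves 3.0 only with probability P(d > x), and 3.0 * P(d > x) = 0.6 < 1.0 once x reaches 80.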