
1 Algorithms for Isotonic Regression. Günter Rote, Freie Universität Berlin. [Title slide; figure: the input values $a_i$ plotted against the index $i$.]

2 Algorithms for Isotonic Regression. Günter Rote, Freie Universität Berlin. Minimize $\sum_{i=1}^{n} h(|x_i - a_i|)$ subject to $x_1 \le x_2 \le \dots \le x_n$. [Figure: the data $a_i$ and an isotonic fit $x_i$ plotted against $i$.]

3 Objective functions. Minimize $\sum_{i=1}^{n} h(|x_i - a_i|)$ subject to $x_1 \le \dots \le x_n$. $h(z) = z$: $L_1$-regression, $\sum_{i=1}^{n} |x_i - a_i| \to \min$. $h(z) = z^2$: $L_2$-regression, $\sum_{i=1}^{n} (x_i - a_i)^2 \to \min$. $h(z) = z^p$, $p \to \infty$: $L_\infty$-regression, $\max_{1 \le i \le n} |x_i - a_i| \to \min$. Versions with weights $w_i > 0$: $\sum_{i=1}^{n} w_i |x_i - a_i|$, $\sum_{i=1}^{n} w_i (x_i - a_i)^2$, $\max_{1 \le i \le n} w_i |x_i - a_i|$. General form: $\sum_{i=1}^{n} h_i(x_i) \to \min$ / $\max_{1 \le i \le n} h_i(x_i) \to \min$, with each $h_i$ convex and piecewise "simple".

4 Overview. The classical Pool Adjacent Violators (PAV) algorithm; dynamic programming; more general constraints: $x_i \le x_j$ for $i \preceq j$ with a given partial order $\preceq$; in particular, $L_\infty$ (max) regression with a $d$-dimensional partial order; the randomized optimization technique of Timothy Chan (1998).

5-18 Pool Adjacent Violators (PAV). [A sequence of slides animating the algorithm on the example data.] Minimize $\sum_{i=1}^{n} h(|x_i - a_i|)$ subject to $x_1 \le \dots \le x_n$.
1. Relax all constraints $x_i \le x_{i+1}$ (every $x_i$ starts at its unconstrained optimum $a_i$).
2. Find adjacent violators $x_i > x_{i+1}$.
3. Pool them: set $x_i = x_{i+1} =: \bar{x}$ and solve the pooled subproblem; in the pictured example, $h(|\bar{x} - a_{10}|) + h(|\bar{x} - a_{11}|) \to \min$.
4. Repeat from step 2.

19 Subproblems. $\min_x \sum_{s \le i \le t} w_i |x - a_i|$ $\Rightarrow$ $x$ = weighted median of $a_s, \dots, a_t$. $\min_x \sum_{s \le i \le t} w_i (x - a_i)^2$ $\Rightarrow$ $x = \sum_{s \le i \le t} w_i a_i \big/ \sum_{s \le i \le t} w_i$ = weighted mean of $a_s, \dots, a_t$, computable in $O(1)$ time after $O(n)$ preprocessing. $\Rightarrow$ Weighted isotonic $L_2$ regression is solvable in $O(n)$ time.
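
The $L_2$ case is compact enough to spell out. Below is a minimal sketch of PAV with weighted-mean pooling (not taken from the slides; the function name and block representation are my own):

    def isotonic_l2(a, w):
        """Weighted isotonic L2 regression by Pool Adjacent Violators.

        Each block stores [pooled value (weighted mean), total weight, length].
        Every element is merged into a block at most once, so the total work is O(n).
        """
        blocks = []
        for ai, wi in zip(a, w):
            blocks.append([ai, wi, 1])
            # pool adjacent violators: merge while the new block undercuts its predecessor
            while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
                m2, w2, c2 = blocks.pop()
                m1, w1, c1 = blocks.pop()
                wsum = w1 + w2
                blocks.append([(w1 * m1 + w2 * m2) / wsum, wsum, c1 + c2])
        x = []                              # expand blocks back to x_1 <= ... <= x_n
        for mean, _, count in blocks:
            x.extend([mean] * count)
        return x

    # isotonic_l2([3, 1, 4, 0], [1, 1, 1, 1]) returns [2.0, 2.0, 2.0, 2.0]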

20 Subproblems for $L_1$: $\min_x \sum_{s \le i \le t} w_i |x - a_i|$ $\Rightarrow$ $x$ = weighted median of $a_s, \dots, a_t$. Ahuja and Orlin [2001]: $O(n \log n)$ algorithm based on PAV and scaling: solve the problem for scaled (integer) data $\bar{a}_i := 2 \lfloor a_i / 2 \rfloor$; the solution for the original data $a_i$ can be recovered in $O(n)$ time. We can assume $a_i \in \{1, 2, \dots, n\}$ after sorting. Quentin Stout [2008]: $O(n \log n)$ PAV implementation; median queries by mergeable trees (2-3-trees, AVL trees) extended with weight information. Rote [2012]: $O(n \log n)$ by dynamic programming; a priority queue is sufficient.

21-22 Dynamic programming. $f_k(z) := \min\{\, \sum_{i=1}^{k} w_i |x_i - a_i| : x_1 \le x_2 \le \dots \le x_k = z \,\}$, for $k = 0, 1, \dots, n$ and $z \in \mathbb{R}$. Recursion: $f_k(z) = \underbrace{\min\{ f_{k-1}(x) : x \le z \}}_{g_{k-1}(z)} + w_k |z - a_k|$. Transform $f_{k-1} \to g_{k-1} \to f_k$.

23-24 Recursion step 1. Transform $f_{k-1}$ to $g_{k-1}(z) := \min\{ f_{k-1}(x) : x \le z \}$. [Figure: the graphs of $f_{k-1}$ and $g_{k-1}$ with the minimum at $p_{k-1}$.] Remove the increasing parts right of the minimum $p_{k-1}$ and replace them by a horizontal part.

25-26 Recursion step 2. Transform $g_{k-1}$ to $f_k(z) = g_{k-1}(z) + w_k |z - a_k|$: add two convex piecewise-linear functions. Lemma. $f_k$ is a piecewise-linear convex function. The breakpoints are located at a subset of the points $a_i$. The leftmost piece has slope $-\sum_{i=1}^{k} w_i$. The rightmost piece has slope $w_k$.

27 Piecewise-linear functions. [Figure: the graph $y = f(x)$ with linear pieces $y = s'x + t'$, $y = s''x + t''$, $y = sx + t$ and a breakpoint at $x_0$.] $f$ has a breakpoint at position $x_0$; its value is the increase in slope there (slope of the right piece minus slope of the left piece). Represent $f$ by the rightmost slope $s$ and the set of breakpoints (position + value). (The rightmost intercept $t$ is not needed.)

28-33 Piecewise-linear functions: the two transformations in this representation. Transformation $f_{k-1} \to g_{k-1}$: while $s - (\text{value of rightmost breakpoint}) \ge 0$: remove the rightmost breakpoint and update $s$; finally decrease the value of the (new) rightmost breakpoint by $s$ and set $s$ to 0. Transformation $g_{k-1} \to f_k(z) = g_{k-1}(z) + w_k |z - a_k|$: add $w_k$ to $s$; add a breakpoint at position $a_k$ with value $2 w_k$. Only access to the rightmost breakpoint is required $\Rightarrow$ a priority queue ordered by position suffices.

34 The algorithm.

    Q := empty priority queue of breakpoints, ordered by the key "position";
    s := 0;
    for k = 1, ..., n do
        // On entry, Q and s represent g_{k-1}.
        Q.add(new breakpoint with position := a_k, value := 2*w_k);
        s := s + w_k;
        // We have computed f_k.
        while true do
            B := Q.findmax;                 // rightmost breakpoint B
            if s - B.value < 0 then exit loop;
            s := s - B.value;
            Q.deletemax;
        p_k := B.position;
        B.value := B.value - s;
        s := 0;
        // We have computed g_k.
    // Compute the optimal solution x_1, ..., x_n backwards:
    x_n := p_n;
    for k = n-1, n-2, ..., 1 do
        x_k := min{ x_{k+1}, p_k };
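
For reference, here is a directly runnable version of the same procedure, sketched in Python with heapq as a max-priority queue via negated positions (the function and variable names are mine, not from the slides):

    import heapq

    def isotonic_l1(a, w):
        """Weighted isotonic L1 regression via the priority-queue dynamic program.

        f_k is kept as its rightmost slope s plus a max-heap of breakpoints
        (position, value = increase in slope); heapq is a min-heap, so positions
        are negated.  Returns x with x[0] <= ... <= x[n-1].
        """
        n = len(a)
        heap = []                           # entries (-position, [value]); value is mutable
        s = 0.0
        p = [0.0] * n                       # p[k] = position of the minimum of g_k
        for k in range(n):
            heapq.heappush(heap, (-float(a[k]), [2.0 * w[k]]))   # g_{k-1} -> f_k
            s += w[k]
            while True:                     # f_k -> g_k: chop off the increasing parts
                neg_pos, val = heap[0]      # rightmost breakpoint
                if s - val[0] < 0:
                    break
                s -= val[0]
                heapq.heappop(heap)
            p[k] = -neg_pos
            val[0] -= s                     # the rightmost slope becomes 0
            s = 0.0
        x = [0.0] * n                       # recover the optimal solution backwards
        x[n - 1] = p[n - 1]
        for k in range(n - 2, -1, -1):
            x[k] = min(x[k + 1], p[k])
        return x

    # isotonic_l1([3, 1], [1, 1]) returns [1.0, 1.0] (any common value in [1, 3] is optimal)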

35 General objective functions. Minimize $\sum_{i=1}^{n} h_i(x_i)$. Each $h_i$ is convex and piecewise "simple": summation of pieces in constant time, minimum on a sum of pieces in constant time. Examples: $L_3$ norm: $h_i(x) = (x - a_i)^3$ for $x \ge a_i$ and $-(x - a_i)^3$ for $x \le a_i$, i.e. $|x - a_i|^3$: $O(n \log n)$ time. $L_4$ norm: $h_i(x) = (x - a_i)^4$: $O(n)$ time (no breakpoints! A stack suffices.) Linear + sinusoidal pieces: $w_i \cos(x - a_i)$.

36-37 General partial orders. Minimize $\sum_{i=1}^{n} h_i(x_i)$ or $\max_{1 \le i \le n} h_i(x_i)$ subject to $x_i \le x_j$ for $i \preceq j$, for a given partial order $\preceq$. [Figure: Hasse diagram of a small example order on $x_1, \dots, x_7$.] PAV can be extended to tree-like partial orders. Weighted $L_1$-regression for a DAG with $m$ edges in $O(nm + n^2 \log n)$ time [Angelov, Harb, Kannan, and Wang, SODA 2006] ... and many other results.

38-39 Weighted $L_\infty$ regression. Minimize $z := \max_{1 \le i \le n} h_i(x_i)$ subject to $x_i \le x_j$ for $i \preceq j$. Here $h_i(x) = w_i |x - a_i|$, or $h_i$ = any function which increases from a minimum in both directions. For a level $\varepsilon$: $h_i(x) \le \varepsilon \iff l_i(\varepsilon) \le x \le u_i(\varepsilon)$. [Figure: the graph of $h_i$ with the level $\varepsilon$ and the interval $[l_i(\varepsilon), u_i(\varepsilon)]$.]

40 Checking feasibility. Is the optimum value $z \le \varepsilon$? [$z = \max_i h_i(x_i)$] Find values $x_i$ with (1) $l_i(\varepsilon) \le x_i \le u_i(\varepsilon)$ for all $i$, and (2) $x_i \le x_j$ for $i \preceq j$. Find the smallest values $x_i = x^{\mathrm{low}}_i$ with $l_i(\varepsilon) \le x_i$ for all $i$ and $x_i \le x_j$ for $i \preceq j$. Result: $x^{\mathrm{low}}_j = \max\{\, l_j, \max\{ l_i : i \preceq j \} \,\}$. Calculate in topological order: $x^{\mathrm{low}}_j := \max\{\, l_j, \max\{ x^{\mathrm{low}}_i : i \text{ a predecessor of } j \} \,\}$. Then $z \le \varepsilon$ iff $x^{\mathrm{low}}_i \le u_i(\varepsilon)$ for all $i$ $\Rightarrow$ $O(m + n)$ time.
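
As a concrete reading of this check, here is a small sketch (my own naming) for the case where the DAG is given explicitly by predecessor lists together with a topological order:

    def is_feasible(eps_lower, eps_upper, preds, topo_order):
        """Decide whether the optimum value z is <= eps, given l_i(eps) and u_i(eps).

        eps_lower[i] = l_i(eps), eps_upper[i] = u_i(eps); preds[j] lists the
        immediate predecessors of j; topo_order is a topological order of the DAG.
        Runs in O(n + m).
        """
        x_low = list(eps_lower)
        for j in topo_order:
            for i in preds[j]:
                if x_low[i] > x_low[j]:
                    x_low[j] = x_low[i]
        return all(x_low[j] <= eps_upper[j] for j in range(len(x_low)))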

41 Characterizing feasibility. $z \le \varepsilon$ iff for every pair $i, j$ with $i \preceq j$: $l_i(\varepsilon) \le u_j(\varepsilon)$. Theorem. Define for every pair $i, j$: $z_{ij} := \min\{\, \varepsilon : l_i(\varepsilon) \le u_j(\varepsilon) \,\}$. [Figure: the graphs of $h_i$ and $h_j$ meeting at level $z_{ij}$.] Then $z = \max\{ z_{ij} : i \preceq j \}$.
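
As a worked instance (not on the slide): for the weighted absolute deviations $h_i(x) = w_i |x - a_i|$ one has $l_i(\varepsilon) = a_i - \varepsilon / w_i$ and $u_j(\varepsilon) = a_j + \varepsilon / w_j$, so for $a_i > a_j$
$$ z_{ij} = \min\{\, \varepsilon : a_i - \varepsilon/w_i \le a_j + \varepsilon/w_j \,\} = \frac{a_i - a_j}{1/w_i + 1/w_j} = \frac{w_i w_j (a_i - a_j)}{w_i + w_j}, $$
and $z_{ij} = 0$ when $a_i \le a_j$.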

42-43 $d$-dimensional orders. $n$ points $p = (p_1, p_2, \dots, p_d) \in \mathbb{R}^d$. Product order (domination order): $p \preceq q \iff (p_1 \le q_1) \wedge (p_2 \le q_2) \wedge \dots \wedge (p_d \le q_d)$. [Figure: two points $p \preceq q$ in the plane.] The digraph of the order is not explicitly given; it might have quadratic size.

44-50 $L_\infty$ regression for $d$-dimensional orders. Stout [2011]: $O(n \log^{d-1} n)$ space and $O(n \log^{d} n)$ time. Embed the order in a DAG with more vertices but fewer edges: a divide-and-conquer strategy, recursive in the dimension. [Animation over several slides illustrating the construction.] Recurse in each half. $O(n \log n)$ in 2 dimensions: small Manhattan networks [Gudmundsson, Klein, Knauer, and Smid 2007]. $O(n \log n)$ is optimal in 2 dimensions: Horton sets. Rote [2013]: $O(n)$ space and $O(n \log^{d-1} n)$ expected time.

51 $L_\infty$ regression for $d$-dimensional orders. Rote [2013]: $O(n)$ space and $O(n \log^{d-1} n)$ expected time. Feasibility checking without explicitly constructing the DAG; randomized optimization by Timothy Chan's technique (saves a log-factor). $x^{\mathrm{low}}_j = \max\{\, l_j, \max\{ l_i : p_i \preceq p_j \} \,\}$. More general operation update$(A, B)$ for $A, B \subseteq \{1, \dots, n\}$: for all $j \in B$: $x^{\mathrm{new}}_j = \max\{\, x^{\mathrm{old}}_j, \max\{ x^{\mathrm{old}}_i : i \in A,\ p_i \preceq p_j \} \,\}$. Equivalent formulation: for all $i \in A$, $j \in B$ (in any order): if $p_i \preceq p_j$ then set $x_j := \max\{x_j, x_i\}$. [Figure: the point sets $A$ and $B$.]
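
The equivalent formulation translates directly into a quadratic reference implementation; a sketch (names are mine), useful as a correctness baseline for the divide-and-conquer version that follows:

    def dominates(p, q):
        """Product (domination) order: p precedes q iff p <= q componentwise."""
        return all(pc <= qc for pc, qc in zip(p, q))

    def update_naive(A, B, pts, x):
        """update(A, B): raise x[j] to x[i] for every i in A, j in B with pts[i] <= pts[j]."""
        for i in A:
            for j in B:
                if dominates(pts[i], pts[j]):
                    x[j] = max(x[j], x[i])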

52-56 Partitioning the update operation. [Figure: $A \cup B$ split along coordinate $d$ into a lower part $A^-, B^-$ and an upper part $A^+, B^+$; the remaining coordinates are $1, 2, \dots, d-1$.]
procedure update$(A, B)$: split $A \cup B$ along the last coordinate; update$(A^-, B^-)$; update$(A^-, B^+)$; update$(A^+, B^-)$; update$(A^+, B^+)$.
The pair $(A^-, B^+)$ is a $(d-1)$-dimensional order! (Its last coordinate is already satisfied; and the pair $(A^+, B^-)$ can be dropped, since points of $A^+$ lie above the split in the last coordinate and therefore cannot precede points of $B^-$.) This gives the refined
procedure update$_k(A, B)$: split $A \cup B$ along the $k$-th coordinate; update$_k(A^-, B^-)$; update$_{k-1}(A^-, B^+)$; update$_k(A^+, B^+)$.

57 Partitioning the update operation (2).
procedure update$_k(A, B)$: split $A \cup B$ into two equal parts along the $k$-th coordinate; update$_k(A^-, B^-)$; update$_{k-1}(A^-, B^+)$; update$_k(A^+, B^+)$.
Initially sort along all coordinates in $O(n \log n)$ time; splitting takes linear time. Initial call: update$_d(P, P)$ with $P = \{1, \dots, n\}$. Base case: update$_1(A, B)$ is a linear scan, taking $O(n)$ time. Induction: update$_k(A, B)$ takes $O(n \log^{k-1} n)$ time (with $n = |A| + |B|$). Remark: update$_d(A, B)$ is always called with $A = B$; update$_k(A, B)$ for $k < d$ is always called with $A \cap B = \emptyset$.
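
To make the recursion concrete, here is a simplified self-contained sketch (my own code: it splits by a pivot value rather than into exactly equal parts and is not tuned to meet the stated time bound, but it computes the same result as update_naive):

    def update_rec(A, B, pts, x, k):
        """Raise x[j] (j in B) to max over x[i] with i in A and pts[i] <= pts[j] componentwise.

        Precondition: coordinates k, k+1, ..., d-1 already satisfy pts[i][c] <= pts[j][c]
        for all i in A, j in B; only coordinates 0 .. k-1 still have to be checked.
        """
        if not A or not B:
            return
        if k == 1:
            # one-dimensional base case: sweep by coordinate 0, A-points before B-points on ties
            events = sorted([(pts[i][0], 0, i) for i in A] + [(pts[j][0], 1, j) for j in B])
            best = float("-inf")
            for _, kind, idx in events:
                if kind == 0:
                    best = max(best, x[idx])
                else:
                    x[idx] = max(x[idx], best)
            return
        c = k - 1                                    # split along the k-th coordinate
        vals = sorted({pts[i][c] for i in A} | {pts[j][c] for j in B})
        if len(vals) == 1:                           # coordinate c is constant, hence satisfied
            update_rec(A, B, pts, x, k - 1)
            return
        v = vals[(len(vals) - 1) // 2]               # pivot strictly below the maximum value
        A_lo = [i for i in A if pts[i][c] <= v]
        A_hi = [i for i in A if pts[i][c] > v]
        B_lo = [j for j in B if pts[j][c] <= v]
        B_hi = [j for j in B if pts[j][c] > v]
        update_rec(A_lo, B_lo, pts, x, k)            # both parts below the split
        update_rec(A_lo, B_hi, pts, x, k - 1)        # coordinate c is now guaranteed
        update_rec(A_hi, B_hi, pts, x, k)            # both parts above the split
        # pairs (A_hi, B_lo) violate coordinate c and can be skipped

    # usage: x = list(l); P = list(range(len(pts))); update_rec(P, P, pts, x, d)
    # afterwards x[j] = x_low_j = max of l_i over all i with pts[i] <= pts[j]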

58 Randomized optimization technique. The problem is decomposable: $z = \max\{ z_{ij} : i \preceq j \}$. Define $z(P) := \max\{ z_{ij} : i, j \in P,\ i \preceq j \}$. If $P = P_1 \cup P_2 \cup P_3$, then $z(P) = \max\{ z(P_1 \cup P_2),\ z(P_1 \cup P_3),\ z(P_2 \cup P_3) \}$. (Similar problems: diameter, closest pair.) We can check feasibility: is $z(P) \le \varepsilon$? Lemma (Chan 1998). $\Rightarrow$ The solution can be computed in the same expected time as checking feasibility.

59 Randomized maximization. Permute the subproblems $S_1, S_2, \dots, S_r$ into random order.
$z := -\infty$;
for $k = 1, \dots, r$ do: if $z(S_k) > z$ (*) then $z := z(S_k)$ (**);
Proposition. The test (*) is executed $r$ times, and the computation (**) is executed in expectation at most $H_r = 1 + \frac{1}{2} + \dots + \frac{1}{r} \le 1 + \ln r$ times.
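
In code, this is just a prefix-maxima computation over a random permutation; a sketch (the callback names are my own):

    import random

    def randomized_max(subproblems, is_at_most, solve):
        """Chan-style randomized optimization: few expensive solves in expectation.

        is_at_most(S, z) is the cheap feasibility test "is z(S) <= z?" (*);
        solve(S) computes z(S) exactly (**) and is expected to run only O(log r) times.
        """
        order = list(subproblems)
        random.shuffle(order)
        z = float("-inf")
        for S in order:
            if not is_at_most(S, z):    # (*) executed r times
                z = solve(S)            # (**) executed about H_r times in expectation
        return z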

60 $L_\infty$ regression in $d$ dimensions. Partition $P$ into 10 equal subsets $P_1, \dots, P_{10}$ (for example along the $d$-axis). Form the 45 subproblems $(P_i, P_j)$ for $1 \le i < j \le 10$ and permute them into random order.
$z := -\infty$;
for $k = 1, \dots, 45$ do: let $(P_i, P_j)$ be the $k$-th subproblem; feasibility check: is $z(P_i \cup P_j) \le z$? (*) If not, then compute $z(P_i \cup P_j)$ recursively (**) and set $z := z(P_i \cup P_j)$.
$T(n) = O(\mathrm{FEAS}(n)) + H_{45} \cdot T(2n/10)$, with $\mathrm{FEAS}(n) = O(n \log^{d-1} n)$ and $T(2n/10) = T(n/5)$, which solves to $T(n) = O(n \log^{d-1} n)$.
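
Why the recurrence solves (my added note, not on the slide): each subproblem $P_i \cup P_j$ has $2n/10 = n/5$ points, and $H_{45} = \sum_{t=1}^{45} 1/t \approx 4.4 < 5$, so the recursion is dominated by a geometric series: $T(n) \le c\, n \log^{d-1} n \cdot \sum_{t \ge 0} (H_{45}/5)^t = O(n \log^{d-1} n)$.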
