Complexity Theory of Polynomial-Time Problems

Complexity Theory of Polynomial-Time Problems
Lecture 8: (Boolean) Matrix Multiplication
Karl Bringmann

Recall: Boolean Matrix Multiplication
Given n x n matrices A, B with entries in {0,1}, compute the matrix C with C_{i,j} = ⋁_{k=1}^{n} A_{i,k} ∧ B_{k,j}.
What we already know about BMM:
- BMM is in time O(n^3 / log n) (four Russians)
- BMM is equivalent to computing the Transitive Closure of a given graph
- BMM can be reduced to APSP, which is in time O(n^3 / 2^{Ω(√log n)})
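To make the definition concrete, here is a minimal reference implementation of BMM in Python (naive cubic time, purely for illustration; all names are our own, not from the lecture):

```python
def bmm_naive(A, B):
    """Boolean matrix product: C[i][j] = OR over k of (A[i][k] AND B[k][j])."""
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                if A[i][k] and B[k][j]:
                    C[i][j] = 1
                    break  # one witness k suffices
    return C

# tiny usage example
A = [[1, 0], [0, 1]]
B = [[0, 1], [1, 0]]
print(bmm_naive(A, B))  # [[0, 1], [1, 0]]
```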

Exponent of Matrix Multiplication
Define ω as the infimum over all c such that MM has an O(n^c) algorithm.
- note: MM is in time O(n^{ω+ε}) for any ε > 0; we will be sloppy and write: MM is in time O(n^ω)
- note: MM is not in time O(n^{ω-ε}) for any ε > 0
- ω ≥ 2
Thm: ω < 3. This is very fast in theory, but all these algorithms have impractically large constant factors (maybe except Strassen '69):
    Strassen '69                 2.81
    Pan '78                      2.79
    Bini et al. '79              2.78
    Schönhage '80                2.52
    Romani '80                   2.52
    Coppersmith, Winograd '81    2.50
    Strassen '86                 2.48
    Coppersmith, Winograd '90    2.376
    Stothers '10                 2.374
    Vassilevska Williams '11     2.37288
    Le Gall '14                  2.37287

Boolean Matrix Multiplication
Thm: BMM is in time O(n^ω).
Given n x n matrices A, B with entries in {0,1}:
- compute the standard (integer) matrix product C' with C'_{i,j} = Σ_{k=1}^{n} A_{i,k} · B_{k,j}
- define the matrix C with C_{i,j} = [C'_{i,j} > 0]
- then C is the Boolean matrix product of A and B
Hypothesis: BMM is not in time O(n^{ω-ε}).
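A short sketch of this reduction in Python. It uses numpy's integer matrix product as a stand-in for an O(n^ω) multiplication routine (numpy does not actually achieve O(n^ω); the point is only the "multiply, then threshold" step):

```python
import numpy as np

def bmm_via_integer_mm(A, B):
    """Boolean product via one standard matrix product plus thresholding."""
    A = np.asarray(A, dtype=np.int64)
    B = np.asarray(B, dtype=np.int64)
    C_prime = A @ B                         # C'[i,j] = sum_k A[i,k] * B[k,j]
    return (C_prime > 0).astype(np.int64)   # C[i,j] = [C'[i,j] > 0]

A = np.array([[1, 0], [0, 1]])
B = np.array([[0, 1], [1, 0]])
print(bmm_via_integer_mm(A, B))  # [[0 1] [1 0]]
```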

Combinatorial Algorithms
Fast matrix multiplication uses algebraic techniques, which are impractical.
"Combinatorial algorithms": do not use algebraic techniques (not well defined!).
- Arlazarov, Dinic, Kronrod, Faradzhev '70 (four Russians):  O(n^3 / log^2 n)
- Bansal, Williams '09:   O(n^3 (log log n)^2 / log^{9/4} n)
- Chan '15:               O(n^3 (log log n)^3 / log^3 n)
- Yu '15:                 O(n^3 poly(log log n) / log^4 n)
Hypothesis: BMM has no combinatorial algorithm in time O(n^{3-ε}).

In this lecture you learn that...
(B)MM is useful for designing theoretically fast algorithms:
- Exercise: k-Clique in O(n^{ωk/3})
- Exercise: MaxCut in O(2^{ωn/3} poly(n))
- Node-Weighted Negative Triangle in O(n^ω)
BMM is an obstacle for practically fast / theoretically very fast algorithms (no combinatorial O(n^{3-ε}) / not faster than O(n^{2.373})):
- Transitive Closure has no O(n^{ω-ε}) / combinatorial O(n^{3-ε}) algorithm
- Exercise: pattern matching with 2 patterns
- Sliding Window Hamming Distance
- context-free grammar problems

Outline
1) Relations to Triangle: Subcubic Equivalences
2) Strassen's Algorithm
3) Sliding Window Hamming Distance
4) Node-Weighted Negative Triangle
5) Context-Free Grammars

Corollaries from Subcubic Equivalences
[Diagram from Vassilevska Williams, Williams '10: subcubic reductions connecting APSP, Min-Plus Product, All-Pairs-Negative-Triangle and Negative Triangle on one side with BMM, All-Pairs-Triangle and Triangle on the other.]
Triangle: given an unweighted graph G, does it contain a triangle?
All-Pairs-Triangle: given an unweighted graph G with vertices V = I ∪ J ∪ K, decide for all i ∈ I, j ∈ J: are they in a triangle with some k ∈ K?


Triangle reduces to BMM:
Given an unweighted undirected graph G with adjacency matrix A (entries in {0,1}):
1. Compute the Boolean product C = A · A, i.e. C_{i,j} = ⋁_k A_{i,k} ∧ A_{k,j}.
2. Compute ⋁_{i,j} A_{i,j} ∧ C_{i,j}; this equals ⋁_{i,j,k} A_{i,j} ∧ A_{i,k} ∧ A_{k,j}, so it is 1 iff G contains a triangle.
Thus we solved triangle detection with one BMM plus O(n^2) extra work.
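A minimal numpy sketch of this reduction (again using numpy's product only as a placeholder for a fast Boolean product; the function name is ours):

```python
import numpy as np

def has_triangle(adj):
    """Triangle detection via one Boolean matrix product.

    adj: symmetric 0/1 adjacency matrix with zero diagonal.
    """
    A = np.asarray(adj, dtype=np.int64)
    C = (A @ A) > 0                  # C[i,j] = 1 iff some k is adjacent to both i and j
    return bool(np.any(np.logical_and(A > 0, C)))  # is some edge (i,j) closed by a k?

# triangle on vertices 0,1,2; vertex 3 isolated
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 0],
              [0, 0, 0, 0]])
print(has_triangle(A))  # True
```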


All-Pairs-Triangle to Triangle
Triangle: given a graph G, decide whether there are vertices i, j, k that form a triangle.
All-Pairs-Triangle: given a graph G with vertex set V = I ∪ J ∪ K, decide for every i ∈ I, j ∈ J whether there is a vertex k ∈ K such that i, j, k form a triangle.
Split I, J, K into n/s parts of size s: I_1, ..., I_{n/s}, J_1, ..., J_{n/s}, K_1, ..., K_{n/s}.
For each of the (n/s)^3 triples (I_x, J_y, K_z): consider the subgraph G[I_x ∪ J_y ∪ K_z].

All-Pairs-Triangle to Triangle
Initialize C as the n x n all-zeroes matrix.
For each of the (n/s)^3 triples of parts (I_x, J_y, K_z):
    While G[I_x ∪ J_y ∪ K_z] contains a triangle:
        Find a triangle (i, j, k) in G[I_x ∪ J_y ∪ K_z]
        Set C[i, j] ← 1
        Delete edge (i, j)    (afterwards (i, j) is in no more triangles)
Guaranteed termination: we can delete at most n^2 edges.
Correctness: if (i, j) is in a triangle, we will find one.

All-Pairs-Triangle to Triangle
Find a triangle (i, j, k) in G[I_x ∪ J_y ∪ K_z]: how do we find a triangle if we can only decide whether one exists?
- Partition I_x into I_x^(1), I_x^(2), J_y into J_y^(1), J_y^(2), and K_z into K_z^(1), K_z^(2).
- Since G[I_x ∪ J_y ∪ K_z] contains a triangle, at least one of the 2^3 = 8 subgraphs G[I_x^(a) ∪ J_y^(b) ∪ K_z^(c)] contains a triangle.
- Decide for each such subgraph whether it contains a triangle; recursively find a triangle in one subgraph that does.
Running time: T_find(n) ≤ 2^3 · T_decide(n) + T_find(n/2) = O(T_decide(n)), since T_decide grows at least linearly and the geometric sum is dominated by the top level.
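A hedged Python sketch of this "search via decision" subroutine; decide_triangle is assumed to be any black-box decision procedure (here a naive one, just to make the sketch runnable), and all names are our own:

```python
from itertools import product

def decide_triangle(edges, I, J, K):
    """Naive decision oracle: is there i in I, j in J, k in K forming a triangle?"""
    return any(frozenset((i, j)) in edges and frozenset((j, k)) in edges
               and frozenset((k, i)) in edges
               for i, j, k in product(I, J, K))

def _halves(X):
    """Split a list into (at most) two halves."""
    if len(X) == 1:
        return [X]
    mid = len(X) // 2
    return [X[:mid], X[mid:]]

def find_triangle(edges, I, J, K):
    """Find a triangle using only the decision oracle, halving each part.

    Assumes decide_triangle(edges, I, J, K) is True.
    """
    if len(I) == 1 and len(J) == 1 and len(K) == 1:
        return I[0], J[0], K[0]
    for I2, J2, K2 in product(_halves(I), _halves(J), _halves(K)):
        if decide_triangle(edges, I2, J2, K2):
            return find_triangle(edges, I2, J2, K2)

# toy usage: parts I = {0,1,2}, J = {3,4,5}, K = {6,7,8}; triangle 0-3-6
edges = {frozenset(e) for e in [(0, 3), (3, 6), (6, 0), (1, 4)]}
print(find_triangle(edges, [0, 1, 2], [3, 4, 5], [6, 7, 8]))  # (0, 3, 6)
```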

All-Pairs-Triangle to Triangle: running time
One iteration of the while-loop above costs O(T_find(s)) = O(T_decide(s)).
Total time: (#triples + #triangles found) · O(T_decide(s)) ≤ ((n/s)^3 + n^2) · O(T_decide(s)).
Set s = n^{1/3} and assume T_decide(n) = O(n^{3-ε}).
Total time: O(n^2 · n^{1-ε/3}) = O(n^{3-ε/3}).

Corollaries from Subcubic Equivalences
- If BMM has a (combinatorial) O(n^{3-ε}) algorithm, then Triangle has a (combinatorial) O(n^{3-ε}) algorithm.
- If Triangle has a (combinatorial) O(n^{3-ε}) algorithm, then BMM has a (combinatorial) O(n^{3-ε/3}) algorithm.
So BMM and Triangle are subcubic equivalent, but this mainly makes sense for combinatorial algorithms.

Outline
1) Relations to Triangle: Subcubic Equivalences
2) Strassen's Algorithm
3) Sliding Window Hamming Distance
4) Node-Weighted Negative Triangle
5) Context-Free Grammars

Strassen's Algorithm (shows ω ≤ 2.81)
Split A, B and C = A · B into n/2 x n/2 blocks:
    A = (A_{1,1} A_{1,2}; A_{2,1} A_{2,2}),  B = (B_{1,1} B_{1,2}; B_{2,1} B_{2,2}),  C = (C_{1,1} C_{1,2}; C_{2,1} C_{2,2})
The naive block product uses 8 multiplications of n/2 x n/2 matrices:
    C_{1,1} = A_{1,1} B_{1,1} + A_{1,2} B_{2,1}
    C_{1,2} = A_{1,1} B_{1,2} + A_{1,2} B_{2,2}
    C_{2,1} = A_{2,1} B_{1,1} + A_{2,2} B_{2,1}
    C_{2,2} = A_{2,1} B_{1,2} + A_{2,2} B_{2,2}
    T(n) ≤ 8 T(n/2) + O(n^2),  so T(n) = O(n^3)

Strassen's Algorithm (shows ω ≤ 2.81)
Strassen: 7 multiplications of n/2 x n/2 matrices suffice:
    M_1 = (A_{1,1} + A_{2,2})(B_{1,1} + B_{2,2})
    M_2 = (A_{2,1} + A_{2,2}) B_{1,1}
    M_3 = A_{1,1} (B_{1,2} - B_{2,2})
    M_4 = A_{2,2} (B_{2,1} - B_{1,1})
    M_5 = (A_{1,1} + A_{1,2}) B_{2,2}
    M_6 = (A_{2,1} - A_{1,1})(B_{1,1} + B_{1,2})
    M_7 = (A_{1,2} - A_{2,2})(B_{2,1} + B_{2,2})
    C_{1,1} = M_1 + M_4 - M_5 + M_7
    C_{1,2} = M_3 + M_5
    C_{2,1} = M_2 + M_4
    C_{2,2} = M_1 - M_2 + M_3 + M_6
    T(n) ≤ 7 T(n/2) + O(n^2),  so T(n) = O(n^{log_2 7}) = O(n^{2.81})
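A compact numpy sketch of Strassen's recursion for n a power of two (purely illustrative; a practical implementation would tune the cutoff below which it falls back to the ordinary product):

```python
import numpy as np

def strassen(A, B, cutoff=64):
    """Strassen's algorithm for n x n matrices, n a power of two."""
    n = A.shape[0]
    if n <= cutoff:
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C

A = np.random.randint(0, 10, (128, 128))
B = np.random.randint(0, 10, (128, 128))
assert np.array_equal(strassen(A, B), A @ B)  # agrees with the standard product
```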

Faster Matrix Multiplication
tensor = 3-dimensional matrix
matrix multiplication tensor: n^2 rows / columns / "depths", entries in {0,1};
    the entry indexed by ((i, j), (i', k'), (k'', j'')) is 1  iff  i = i', j = j'' and k' = k'',
    i.e. iff the product A_{i',k'} · B_{k'',j''} appears in C_{i,j}.
matrix of rank 1: outer product of two vectors;   matrix of rank r: sum of r rank-1 matrices
tensor of rank 1: outer product of three vectors; tensor of rank r: sum of r rank-1 tensors
matrix rank is in P; tensor rank is not known to be in P.

Faster Matrix Multiplication
Strassen: the rank of the matrix multiplication tensor for n = 2 is at most 7.
Any upper bound on the rank of the MM-tensor can be transformed into an MM-algorithm;
thus the search for faster MM-algorithms is a purely mathematical question.
This is complete: one can determine ω by analyzing tensor rank!
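To make the tensor concrete, here is a small sketch (our own illustration, not from the lecture) that builds the n = 2 matrix multiplication tensor and checks that contracting it with A and B recovers the product A·B:

```python
import numpy as np

n = 2
# T is indexed by flattened index pairs (i,j), (i2,k2), (k3,j3);
# T[(i,j), (i2,k2), (k3,j3)] = 1 iff i == i2, j == j3 and k2 == k3.
T = np.zeros((n * n, n * n, n * n), dtype=int)
for i in range(n):
    for j in range(n):
        for i2 in range(n):
            for k2 in range(n):
                for k3 in range(n):
                    for j3 in range(n):
                        if i == i2 and j == j3 and k2 == k3:
                            T[i * n + j, i2 * n + k2, k3 * n + j3] = 1

A = np.random.randint(0, 5, (n, n))
B = np.random.randint(0, 5, (n, n))
# Contracting T with the flattened A and B gives the flattened product C = A @ B.
C_flat = np.einsum('cab,a,b->c', T, A.reshape(-1), B.reshape(-1))
assert np.array_equal(C_flat.reshape(n, n), A @ B)
print(C_flat.reshape(n, n))
```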

Outline
1) Relations to Triangle: Subcubic Equivalences
2) Strassen's Algorithm
3) Sliding Window Hamming Distance
4) Node-Weighted Negative Triangle
5) Context-Free Grammars

Sliding Window Hamming Distance
Given two strings, a text T of length n and a pattern P of length m < n, compute for each i the Hamming distance of P and T[i .. i + m - 1].
Example: T = abcbbcaabbca, P = bbca; the distances at the first five alignments are 2, 3, 3, 0, 2.
Best known algorithm: O(n √m polylog n), hence O(n^{1.5}) up to log factors [Indyk, Porat, Clifford '09].
Thm: Sliding Window Hamming Distance has no O(n^{ω/2-ε}) algorithm and no combinatorial O(n^{1.5-ε}) algorithm, unless the BMM-Hypothesis fails.
Open Problem: get rid of "combinatorial", or design an improved algorithm using MM.
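For reference, a naive O(nm) baseline that just implements the definition (our own illustration, not the fast algorithm discussed in the lecture):

```python
def sliding_hamming(text, pattern):
    """For each alignment i, the Hamming distance of pattern and text[i : i + len(pattern)]."""
    n, m = len(text), len(pattern)
    return [sum(text[i + j] != pattern[j] for j in range(m)) for i in range(n - m + 1)]

print(sliding_hamming("abcbbcaabbca", "bbca"))  # starts with [2, 3, 3, 0, 2, ...]
```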

Sliding Window Hamming Distance: reduction from BMM
Given Boolean n x n matrices A, B (in the example: n = 3, A = (1 0 0; 1 1 1; 0 1 1), B = (1 1 0; 1 0 1; 0 1 1), and we ask for A · B).
Alphabet: {1, 2, ..., n, x, y, $}.
Pattern = concatenation of the rows of A, where entry A_{i,k} is written as the symbol k if A_{i,k} = 1 and as x otherwise:
    pattern:  1 x x  1 2 3  x 2 3
Text = concatenation of the columns of B, padded with $-symbols, where entry B_{k,j} is written as the symbol k if B_{k,j} = 1 and as y otherwise:
    text:  $ $ $ $ $ $  1 2 y  $  1 y 3  $  y 2 3  $ $ $ $ $ $

Sliding Window Hamming Distance: reduction from BMM (continued)
Slide the pattern across the text. A match can only occur between a symbol k in the pattern (encoding A_{i,k} = 1) and the same symbol k in the text (encoding B_{k,j} = 1); the symbols x, y and $ never match anything.
So: put a 1 whenever an alignment has at least one match, i.e. whenever its Hamming distance is strictly smaller than the pattern length. From these entries one reads off exactly the Boolean product A · B.

Sliding Window Hamming Distance
Given Boolean n x n matrices A, B, we construct text and pattern of length O(n^2) (in time O(n^2)). Thus:
- an O(N^{ω/2-ε}) algorithm for Sliding Window Hamming Distance (on inputs of length N) would yield an O(n^{ω-2ε}) algorithm for BMM, contradicting the BMM-Hypothesis, and
- an O(N^{1.5-ε}) combinatorial algorithm for Sliding Window Hamming Distance would yield an O(n^{3-2ε}) combinatorial algorithm for BMM.
Thm: Sliding Window Hamming Distance has no O(n^{ω/2-ε}) algorithm and no combinatorial O(n^{1.5-ε}) algorithm, unless the BMM-Hypothesis fails.

Outline
1) Relations to Triangle: Subcubic Equivalences
2) Strassen's Algorithm
3) Sliding Window Hamming Distance
4) Node-Weighted Negative Triangle
5) Context-Free Grammars

Node-Weighted Negative Triangle
(Edge-Weighted) Negative Triangle [best known: O(n^3)]: given a directed graph with weights w_{i,j} on the edges, is there a triangle i, j, k with w_{j,i} + w_{i,k} + w_{k,j} < 0?
Node-Weighted Negative Triangle: given a directed graph with weights w_i on the nodes, is there a triangle i, j, k with w_i + w_j + w_k < 0?
Triangle [O(n^ω)]: given an unweighted undirected graph, is there a triangle? (This is the special case of node weights w_i = -1.)

Node-Weighted Negative Triangle reduces to (Edge-Weighted) Negative Triangle:
find appropriate edge weights that simulate the given node weights: set w_{i,j} := (w_i + w_j)/2;
then for every triangle i, j, k: w_{j,i} + w_{i,k} + w_{k,j} = w_i + w_j + w_k.
This only gives an O(n^3) algorithm; can Node-Weighted Negative Triangle be solved in O(n^ω)?
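A tiny sketch of this weight transformation, together with a brute-force check that it preserves triangle weights (illustration only; all names are ours):

```python
import itertools

def node_to_edge_weights(node_w, edges):
    """Simulate node weights by edge weights: w[i,j] = (w_i + w_j) / 2."""
    return {(i, j): (node_w[i] + node_w[j]) / 2 for (i, j) in edges}

# complete directed graph on 4 nodes with node weights
node_w = {0: -5, 1: 2, 2: 1, 3: 4}
edges = [(i, j) for i in node_w for j in node_w if i != j]
edge_w = node_to_edge_weights(node_w, edges)

for i, j, k in itertools.permutations(node_w, 3):
    assert edge_w[(j, i)] + edge_w[(i, k)] + edge_w[(k, j)] == node_w[i] + node_w[j] + node_w[k]
print("edge weights simulate node weights on every triangle")
```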

Node-Weighted Negative Triangle
Thm [Czumaj, Lingas '07]: Node-Weighted Negative Triangle is in time O(n^ω); in fact, even Node-Weighted Minimum Weight Triangle is in time O(n^ω).

Node-Weighted Minimum Weight Triangle
We can assume that the graph is tripartite: from G build G' with three copies u_1, u_2, u_3 of every vertex u (keeping its weight), where u_a and v_b with a ≠ b are adjacent in G' iff u and v are adjacent in G. Then a triangle i, j, k in G corresponds to a triangle i_1, j_2, k_3 in G'.

Node-Weighted Minimum Weight Triangle
Given a graph G = (V, E) with V = I ∪ J ∪ K and node weights w_v, compute the minimum weight q such that there are i ∈ I, j ∈ J, k ∈ K with (i, j), (j, k), (k, i) ∈ E and w_i + w_j + w_k = q.
- assume that I, J, K are sorted by weight
- parameter p (a sufficiently large constant)
- assume n := |I| = |J| = |K| = p^ℓ for some ℓ ∈ ℕ (add isolated dummy vertices)

Node-Weighted Minimum Weight Triangle
ALG(G):
0) if n = O(1): solve in constant time
1) split I = I_1 ∪ ... ∪ I_p, J = J_1 ∪ ... ∪ J_p, K = K_1 ∪ ... ∪ K_p into p consecutive parts of the sorted order (so max w(I_x) ≤ min w(I_{x+1}), and so on)
2) R := { (x, y, z) ∈ {1, ..., p}^3 : G[I_x ∪ J_y ∪ K_z] contains a triangle }
3) for each (x, y, z) ∈ R such that there is no (x', y', z') ∈ R with x' < x, y' < y, z' < z: run ALG(G[I_x ∪ J_y ∪ K_z]); return the minimum triangle weight found
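A hedged Python sketch of ALG (our own illustration). It uses a naive triangle test where the real algorithm would use an O(n^ω) test, brute-forces small subproblems instead of the constant-size base case, and assumes the vertex parts are given as weight-sorted lists:

```python
from itertools import product

def min_weight_triangle(adj, w, I, J, K, p=2, base=8):
    """Minimum of w[i]+w[j]+w[k] over triangles with i in I, j in J, k in K, or None.

    adj: dict mapping each vertex to its set of neighbours.
    I, J, K: lists of vertices, each sorted by weight w.
    """
    if min(len(I), len(J), len(K)) <= base:
        # small subproblem: brute force (stands in for "solve in constant time")
        weights = [w[i] + w[j] + w[k]
                   for i, j, k in product(I, J, K)
                   if j in adj[i] and k in adj[j] and i in adj[k]]
        return min(weights) if weights else None

    def split(X):
        # p consecutive parts of the weight-sorted list
        s = (len(X) + p - 1) // p
        return [X[a:a + s] for a in range(0, len(X), s)]

    def has_tri(A, B, C):
        # naive triangle test; the real algorithm uses an O(n^omega) test here
        return any(b in adj[a] and c in adj[b] and a in adj[c]
                   for a, b, c in product(A, B, C))

    Is, Js, Ks = split(I), split(J), split(K)
    R = {(x, y, z)
         for x, y, z in product(range(len(Is)), range(len(Js)), range(len(Ks)))
         if has_tri(Is[x], Js[y], Ks[z])}

    best = None
    for (x, y, z) in R:
        # skip blocks dominated by another triangle-containing block:
        # the dominating block contains a triangle that is at most as heavy
        if any(x2 < x and y2 < y and z2 < z for (x2, y2, z2) in R):
            continue
        q = min_weight_triangle(adj, w, Is[x], Js[y], Ks[z], p, base)
        if q is not None and (best is None or q < best):
            best = q
    return best

# toy usage: light triangle 0-10-20 (weight 3) and heavy triangle 1-11-21 (weight 15)
w = {0: 1, 1: 5, 10: 1, 11: 5, 20: 1, 21: 5}
adj = {v: set() for v in w}
for a, b in [(0, 10), (10, 20), (20, 0), (1, 11), (11, 21), (21, 1)]:
    adj[a].add(b); adj[b].add(a)
print(min_weight_triangle(adj, w, [0, 1], [10, 11], [20, 21]))  # 3
```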

Node-Weighted Minimum Weight Triangle: correctness of step 3)
- If there is no triangle in G[I_x ∪ J_y ∪ K_z], then we can ignore this block.
- If (x, y, z) ∈ R is dominated by some (x', y', z') ∈ R (i.e. x' < x, y' < y, z' < z): let (i, j, k) be a triangle in G[I_x ∪ J_y ∪ K_z] and (i', j', k') a triangle in G[I_{x'} ∪ J_{y'} ∪ K_{z'}]. Then
      w_i + w_j + w_k ≥ min w(I_x) + min w(J_y) + min w(K_z)
  and
      w_{i'} + w_{j'} + w_{k'} ≤ max w(I_{x'}) + max w(J_{y'}) + max w(K_{z'}).
  Since the parts are split in sorted order and x' < x, y' < y, z' < z, we have max w(I_{x'}) ≤ min w(I_x) and so on, so the triangle (i', j', k') is at most as heavy as (i, j, k), and we can safely ignore G[I_x ∪ J_y ∪ K_z].

Node-Weighted Minimum Weight Triangle: running time
Step 2) tests p^3 subgraphs of size O(n/p) for a triangle, which takes time O(p^3 n^ω) using the O(n^ω) triangle detection from above.
Each recursive call in step 3) is on a subgraph of size n/p. How many recursive calls can there be?

Node-Weighted Minimum Weight Triangle: number of recursive calls
How many iterations does step 3) perform?
Define Δ_{r,s} := { (x, y, z) ∈ {1, ..., p}^3 : x - y = r and x - z = s }
              = { (1, 1-r, 1-s), (2, 2-r, 2-s), ..., (p, p-r, p-s) } ∩ {1, ..., p}^3.
For r, s ∈ {-p, ..., p} the sets Δ_{r,s} cover {1, ..., p}^3.
Step 3) applies to at most one (x, y, z) in Δ_{r,s} for any r, s: within Δ_{r,s} the triples form a chain under coordinate-wise domination, so only the smallest member of R ∩ Δ_{r,s} can be non-dominated.
Hence, there are at most (2p)^2 = 4p^2 recursive calls to ALG.
Recursion: T(n) ≤ 4p^2 T(n/p) + O(p^3 n^ω).


Node-Weighted Minimum Weight Triangle: solving the recursion
T(n) ≤ 4p^2 T(n/p) + O(p^3 n^ω), i.e. T(n) ≤ 4p^2 T(n/p) + α p^3 n^ω for some constant α.
We want to show: T(n) ≤ 2α p^3 n^ω.
Plug in: T(n) ≤ 4p^2 · 2α p^3 (n/p)^ω + α p^3 n^ω = α p^3 n^ω (1 + 8 p^{2-ω}).
Assume ω > 2: this is at most 2α p^3 n^ω for a sufficiently large constant p, so T(n) = O(n^ω).
(If ω = 2: show instead that T(n) ≤ 2α p^3 n^{ω+ε}.)
In total: T(n) = O(n^ω + n^{2+ε}) for any ε > 0.

In this lecture you learned that...
(B)MM is useful for designing theoretically fast algorithms:
- Exercise: k-Clique in O(n^{ωk/3})
- Exercise: MaxCut in O(2^{ωn/3} poly(n))
- Node-Weighted Negative Triangle in O(n^ω)
BMM is an obstacle for practically fast / theoretically very fast algorithms (no combinatorial O(n^{3-ε}) / not faster than O(n^{2.373})):
- Transitive Closure has no O(n^{ω-ε}) / combinatorial O(n^{3-ε}) algorithm
- Exercise: pattern matching with 2 patterns
- Sliding Window Hamming Distance
- context-free grammar problems