
Linear Programming Formulation of Boolean Satisfiability Problem

Maknickas Algirdas Antano

Abstract Using multi-valued logic, 2-SAT Boolean satisfiability is formulated as a linear programming optimization problem. The same linear programming formulation is then extended to 3-SAT and to k-SAT satisfiability for k greater than 3. With the new analytic formulas and the linear programming formulation of Boolean satisfiability, the proposition that kSAT is in P and can be solved in linear time is proved.

Key words: Boolean satisfiability, conjunctive normal form, multivalued logic, linear programming, time complexity, NP problem

1 Introduction

The Boolean satisfiability (SAT) problem [1] is defined as follows: given a Boolean formula, check whether an assignment of Boolean values to the propositional variables in the formula exists such that the formula evaluates to true. If such an assignment exists, the formula is said to be satisfiable; otherwise, it is unsatisfiable. For a formula with n variables and m clauses, the conjunctive normal form (CNF)

$$(X_1 \vee X_2) \wedge (X_3 \vee X_4) \wedge \dots \wedge (X_{n-1} \vee X_n) \qquad (1)$$

is the most frequently used representation of Boolean formulas, where the $X_i$ are independent and in most cases $m \geq n$. In CNF, the variables of the formula appear as literals (e.g., $x$) or their negations (e.g., $\bar{x}$, logical NOT). Literals are grouped into clauses, which represent a disjunction (logical OR) of the literals they contain.

Maknickas Algirdas Antano, Vilnius Gediminas Technical University, Sauletekio al. 11, Vilnius, Lithuania, e-mail: algirdas.maknickas@vgtu.lt

A single literal can appear in any number of clauses. The conjunction (logical AND) of all clauses represents a formula. Several algorithms are known for solving the 2-satisfiability problem; the most efficient of them take linear time [2, 3, 4]. Instances of the 2-satisfiability (2SAT) problem are typically expressed as 2-CNF or Krom formulas [2]. SAT was the first known NP-complete problem, as proved by Cook and Levin in 1971 [1], [5]. Until that time, the concept of an NP-complete problem did not even exist. The problem remains NP-complete even if all expressions are written in conjunctive normal form with 3 variables per clause (3-CNF), yielding the 3SAT problem. This means the expression has the form

$$(X_1 \vee X_2 \vee X_3) \wedge (X_4 \vee X_5 \vee X_6) \wedge \dots \wedge (X_{n-2} \vee X_{n-1} \vee X_n) \qquad (2)$$

3SAT is NP-complete and it is used as a starting point for proving that other problems are also NP-hard. This is done by polynomial-time reduction from 3SAT to the other problem. In 2010, Moustapha Diaby provided two further proofs for P = NP. His papers "Linear programming formulation of the vertex colouring problem" and "Linear programming formulation of the set partitioning problem" give linear programming formulations for two well-known NP-hard problems [6, 7].

2 Expressions of multi-valued logic

Let us describe integer discrete logic units as $i$, where $i \in \mathbb{Z}_{+}$. Let us define the discrete function $g^n_k(a)$, $a \in \mathbb{R}$, $k \in \{0,1,2,\dots,n-1\}$, as

$$g^n_k(a) = \lfloor a \rfloor + k \pmod{n} \qquad (3)$$

As mentioned in [8, 9], the function $g^n_k(a)$ is the one-variable generation function of multi-valued logic over the multi-valued set $\{0,1,2,\dots,n-1\}$. There are $n^n$ one-variable logic functions:

$$\rho \begin{pmatrix} i_0 \\ i_1 \\ i_2 \\ \vdots \\ i_{n-1} \end{pmatrix} (a) = \begin{array}{c|c} a & \\ \hline 0 & g^n_{i_0}(a) \\ 1 & g^n_{i_1}(a) \\ 2 & g^n_{i_2}(a) \\ \vdots & \vdots \\ n-1 & g^n_{i_{n-1}}(a) \end{array}, \qquad i_0, i_1, i_2, \dots, i_{n-1} \in \{0,1,2,\dots,n-1\} \qquad (4)$$
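As an illustration of (3) and (4), the following minimal Python sketch (an informal aid, assuming the integer part of a real argument is taken before the shift modulo n) builds the generation function and enumerates the $n^n$ one-variable logic functions; for n = 2 the four Boolean one-variable functions (constant 0, constant 1, identity and negation) all appear among them.

```python
import math
from itertools import product

def g(n, k, a):
    """Generation function of Eq. (3): (floor(a) + k) mod n.
    For integer a this is simply (a + k) mod n; the floor handles real arguments."""
    return (math.floor(a) + k) % n

def one_variable_functions(n):
    """Enumerate the n**n one-variable n-valued logic functions of Eq. (4).
    Each function is fixed by an index tuple (i_0, ..., i_{n-1}) and maps
    a |-> g(n, i_a, a)."""
    for indices in product(range(n), repeat=n):
        yield {a: g(n, indices[a], a) for a in range(n)}

if __name__ == "__main__":
    # For n = 2 the four Boolean one-variable functions appear:
    # constant 0, constant 1, identity and negation.
    for table in one_variable_functions(2):
        print(table)
```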

There are $n^{n^2}$ two-variable logic functions. Each of them is determined by an $n \times n$ index matrix $(i_{j,l})$, $i_{j,l} \in \{0,1,2,\dots,n-1\}$, and its truth table carries the entry $g^n_{i_{a,b}}(a\,b)$ in row $a$ and column $b$:

$$\mu \begin{pmatrix} i_{0,0} & i_{0,1} & \dots & i_{0,n-1} \\ i_{1,0} & i_{1,1} & \dots & i_{1,n-1} \\ \vdots & \vdots & & \vdots \\ i_{n-1,0} & i_{n-1,1} & \dots & i_{n-1,n-1} \end{pmatrix} = \begin{array}{c|cccc} a \backslash b & 0 & 1 & \dots & n-1 \\ \hline 0 & g^n_{i_{0,0}}(a\,b) & g^n_{i_{0,1}}(a\,b) & \dots & g^n_{i_{0,n-1}}(a\,b) \\ 1 & g^n_{i_{1,0}}(a\,b) & g^n_{i_{1,1}}(a\,b) & \dots & g^n_{i_{1,n-1}}(a\,b) \\ \vdots & \vdots & \vdots & & \vdots \\ n-1 & g^n_{i_{n-1,0}}(a\,b) & g^n_{i_{n-1,1}}(a\,b) & \dots & g^n_{i_{n-1,n-1}}(a\,b) \end{array} \qquad (5)$$

Using these expressions for logic of higher dimensions together with a non-deterministic Turing machine makes it possible to build a counter of linear time complexity [9].

The goal of this paper is to prove the proposition that kSAT is in P and can be solved in linear time. For this purpose, one formula of the multi-valued second-order logic for a function of two variables described above will be used.

3 2SAT is in P

Theorem 1. If all variables are unique, the equation

$$\max \beta_2(X_1, X_2, \dots, X_{n-1}, X_n) \qquad (6)$$

where

$$\beta_2(X_1, X_2, \dots, X_{n-1}, X_n) = \mu(X_1 + X_2, \mu(X_3 + X_4, \dots, \mu(X_{n-3} + X_{n-2}, X_{n-1} + X_n))),$$

$$\mu(a,b) = \begin{array}{c|ccc} a \backslash b & 0 & 1 & 2 \\ \hline 0 & 0 & 0 & 0 \\ 1 & 0 & 1 & 1 \\ 2 & 0 & 1 & 1 \end{array} \qquad (7)$$

and $+$ is algebraic summation, can be solved for $X_i \in \{0,1\}$ in O(m).
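Before the proof, a small Python sketch of the table (7) and of the nesting in the definition of $\beta_2$ (an illustration only, not part of the formal argument): for 0/1 arguments, $\beta_2$ evaluates to 1 exactly when every clause sum $X_{2j-1} + X_{2j}$ is nonzero, i.e. when the corresponding 2-CNF is satisfied.

```python
def mu2(a, b):
    """The two-argument function mu of Eq. (7): 0 if either argument is 0, else 1."""
    return 0 if a == 0 or b == 0 else 1

def beta2(xs):
    """beta_2(X_1, ..., X_n) of Eq. (6): nested mu over the clause sums
    X_1 + X_2, X_3 + X_4, ...  Assumes len(xs) is even and at least 4."""
    sums = [xs[i] + xs[i + 1] for i in range(0, len(xs), 2)]
    acc = mu2(sums[-2], sums[-1])           # innermost mu
    for s in reversed(sums[:-2]):           # fold outwards
        acc = mu2(s, acc)
    return acc

print(beta2([1, 0, 0, 1]))   # 1: clauses (X1 or X2) and (X3 or X4) both satisfied
print(beta2([0, 0, 1, 1]))   # 0: the first clause is falsified
```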

Proof. Let us start from an investigation of

$$f(x_1, x_2, \dots, x_m) = \sum_{i=1}^{m} x_i, \qquad x_i \in \mathbb{R} \qquad (8)$$

in the hyper-cube with sides $[0,1]$. This function is convex, because

$$\sum_{i=1}^{m} x_i \geq \sum_{i=1}^{m} x_i \alpha_i \qquad (9)$$

$$1 = \sum_{i=1}^{m} \alpha_i \qquad (10)$$

It is obvious that on the right side of (9) we have a hyper-plane which goes through the vertices $(0,0,0,\dots,0)$ and $(1,1,1,\dots,1)$ of the investigated hyper-cube. This hyper-plane has a global maximum, like the function $f(x_1,x_2,\dots,x_m)$, and it equals 1. So we can conclude that finding the global maximum of $f(x_1,x_2,\dots,x_m)$ can be replaced by finding the global maximum of this hyper-plane, or

$$\max \sum_{i=1}^{m} x_i = \frac{1}{n} \max \sum_{i=1}^{n} x_i \qquad (11)$$

Let us start to investigate (6) with $X_i \in \mathbb{R}$, $i \in \{1,2,\dots,n\}$. The function $\beta_2$ can be calculated within O(m) [10]. According to (7), equation (6) can be rewritten as follows:

$$\max \mu(X_1 + X_2, \mu(X_3 + X_4, \dots, \mu(X_{n-3} + X_{n-2}, X_{n-1} + X_n))) = \max \mu(X_1 + X_2, \max \mu(X_3 + X_4, \dots, \max \mu(X_{n-3} + X_{n-2}, X_{n-1} + X_n))) \qquad (12)$$

So, the m local partial maxima must satisfy the equalities $\max \mu(X_{k-1} + X_k) = 1$ to avoid a zero result for the global maximum. This leads to the inequalities $1 \leq X_{k-1} + X_k \leq 2$, but if we want to use (11) we must keep $X_{k-1} + X_k = 1$. Now we can solve the satisfiability problem as the global maximum of (6). If all variables are unique, (12) can be solved by repeatedly solving the system of LP equations

$$\max \sum_{i=k-1}^{k} X_i, \qquad X_k + X_{k-1} = 1 \qquad (13)$$

Each of them has a non-zero maximum and can be solved with the best known algorithm of linear programming [11] in $O(n^{3.5})$, here in $O(2^{3.5})$. So all clauses can be optimized in $O(2^{3.5} m)$.
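For concreteness, a minimal sketch of the per-clause LP (13), assuming Python with SciPy's general-purpose linprog as the solver (the interior-point implementation of [11] is not needed for the illustration); since linprog minimizes, the objective is negated. Any point with $X_{k-1} + X_k = 1$ attains the maximum value 1.

```python
from scipy.optimize import linprog

def solve_clause_lp():
    """Per-clause LP (13): maximize X_{k-1} + X_k subject to
    X_{k-1} + X_k = 1 and 0 <= X_i <= 1 (linprog minimizes, so negate)."""
    c = [-1.0, -1.0]                      # -(X_{k-1} + X_k)
    A_eq = [[1.0, 1.0]]                   # X_{k-1} + X_k = 1
    b_eq = [1.0]
    bounds = [(0.0, 1.0), (0.0, 1.0)]
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x, -res.fun

x, value = solve_clause_lp()
print(x, value)   # the optimum value is 1; the solver returns one feasible vertex
```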

3.1 Special cases

Theorem 2. If some of the unique variables are negations $\bar{X}_i$, the equation

$$\max \beta_2(X_1, X_2, \dots, X_{n-1}, X_n) \qquad (14)$$

can be solved for $X_i \in \{0,1\}$ in O(m).

Proof. Since all variables are unique, let us replace all negations $\bar{X}_k$ with $\acute{X}_k$ and simply rename all the others to $\acute{X}_i$. Now (14) can be solved by repeatedly solving the system of LP equations

$$\max \sum_{i=k-1}^{k} \acute{X}_i, \qquad \acute{X}_k + \acute{X}_{k-1} = 1, \qquad 0 \leq \acute{X}_{k-1} \leq 1, \qquad 0 \leq \acute{X}_k \leq 1 \qquad (15)$$

Each of them has a non-zero maximum and can be solved in $O(2^{3.5})$. Going back to the old variables does not change the complexity of each solution. So all clauses can be optimized in $O(2^{3.5} m)$.

Theorem 3. If some of the variables in different clauses are not unique, the equation

$$\max \beta_2(X_1, X_2, \dots, X_{n-1}, X_n) \qquad (16)$$

can be solved for $X_i \in \{0,1\}$ in O(m).

Proof. If $X_n, X_{n-1}$ are unique, let us start to solve (16) from the system of LP equations

$$\max \sum_{i=n-1}^{n} X_i, \qquad X_n + X_{n-1} = 1, \qquad 0 \leq X_{n-1} \leq 1, \qquad 0 \leq X_n \leq 1 \qquad (17)$$

Now we know two unique variables, so the maximum is reached and the system of equations is solved in $O(2^{3.5})$. If $X_n = X_{n-1}$, then $X_n = 1$ implies $X_{n-1} = 1$. All other sub-equations can be solved as follows: if the sub-equation being solved has one variable with an earlier found value, the LP equations

$$\max \sum_{i=k-1}^{k} X_i, \qquad 1 \leq X_k + X_{k-1} \leq 2 \qquad (18)$$

can be solved by reducing (18) after inserting the earlier found variable value reassigned to 1 (the lower and upper bounds can be increased in case the broken plane reaches 1 earlier and is then flat in 2D space); if all variables of the sub-equation (18) being solved are not unique, these variables can be reassigned to 1 and the solving repeated with the next clause; if the sub-equation being solved has two unique variables, the LP equations

$$\max \sum_{i=k-1}^{k} X_i, \qquad X_k + X_{k-1} = 1 \qquad (19)$$

can be solved. So all clauses can be optimized in $O(2^{3.5} m)$.
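The reduction step for (18) can be sketched the same way (again in Python with SciPy's linprog, pinning the earlier found variable through its bounds; the fixed values used below are only illustrative assumptions).

```python
from scipy.optimize import linprog

def solve_relaxed_clause_lp(fixed_value):
    """Relaxed per-clause LP (18): maximize X_{k-1} + X_k subject to
    1 <= X_{k-1} + X_k <= 2, with X_{k-1} pinned to a value found from an
    earlier clause (the pinning via bounds is an illustrative choice)."""
    c = [-1.0, -1.0]                          # linprog minimizes, so negate
    A_ub = [[1.0, 1.0], [-1.0, -1.0]]         # sum <= 2 and -(sum) <= -1
    b_ub = [2.0, -1.0]
    bounds = [(fixed_value, fixed_value),     # X_{k-1} already decided
              (0.0, 1.0)]                     # X_k free in [0, 1]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x

print(solve_relaxed_clause_lp(1.0))   # e.g. [1. 1.]: the clause stays satisfied
print(solve_relaxed_clause_lp(0.0))   # e.g. [0. 1.]: X_k must pick up the slack
```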

Theorem 4. If some of the variables in different clauses are not unique and are negations of each other, the equation

$$\max \beta_2(X_1, X_2, \dots, X_{n-1}, X_n) \qquad (20)$$

can be solved for $X_i \in \{0,1\}$ in $O(v+e)$.

Proof. Let us mark all variables as unique or not, starting from the end of the CNF. Then a cycle through the n variables must be repeated to rename them to $\acute{X}_k$, so that the renamed variables marked as not unique become negations, which must be replaced with $1 - \acute{X}_k$ when they are found a second time. In this way the first occurrence of a non-unique variable can be assigned 0, which leads to the value 1 for its negation. This can be done in O(2n). Now we start the solving process from the last clause. If $\acute{X}_n = \acute{X}_{n-1}$, then $\acute{X}_n = 1$. If $\acute{X}_n, \acute{X}_{n-1}$ are unique, let us start to solve (20) from the system of LP equations

$$\max \sum_{i=n-1}^{n} \acute{X}_i, \qquad \acute{X}_n + \acute{X}_{n-1} = 1, \qquad 0 \leq \acute{X}_{n-1} \leq 1, \qquad 0 \leq \acute{X}_n \leq 1 \qquad (21)$$

Now we know two or one unique variables. Aspvall, Plass and Tarjan [12] found a linear-time procedure for solving 2-satisfiability instances, based on the notion of strongly connected components from graph theory. There are several efficient linear-time algorithms for finding the strongly connected components of a graph based on depth-first search: Tarjan's strongly connected components algorithm [13] and the path-based strong component algorithm [14] each perform a single depth-first search. Let us choose the clause walking path as the depth-first search of the strongly connected components of a graph, where the graph is formed by the clauses interconnected through the non-unique variables. So, the other sub-equations can be solved as follows: if the sub-equation being solved has one variable with an earlier found value, the LP equations

$$\max \sum_{i=k-1}^{k} X_i, \qquad 1 \leq X_k + X_{k-1} \leq 2 \qquad (22)$$

can be solved by reducing (22) after inserting the earlier found variable value reassigned to 1 (the lower and upper bounds can be increased in case the broken plane reaches 1 earlier and is then flat in 2D space); if all variables of the sub-equation being solved are not unique and are negations of earlier found values in different clauses, their values can be assigned 0 and the values of the residual variables lead to 1; if all variables of the sub-equation being solved are not unique and are negations of earlier found values in the same clause, one of the variables must be assigned 1 and the other 0; if at least two clauses force $X_i$ and $\bar{X}_i$ simultaneously, the CNF is not satisfiable; if the sub-equation being solved has two unique variables, the LP equations

$$\max \sum_{i=k-1}^{k} X_i, \qquad X_k + X_{k-1} = 1 \qquad (23)$$

can be solved. Each sub-system of equations is solved in $O(2^{3.5})$ to reach the global maximum 1. Finally, all clauses are optimized at most $v+e$ times (v is the number of vertices, e the number of edges), i.e. in $O(2^{3.5}(v+e))$.
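For reference, a compact Python sketch of the strongly-connected-components test of Aspvall, Plass and Tarjan [12] to which the clause walking order appeals; it illustrates the cited linear-time method, not the LP procedure of Theorem 4, and it uses Kosaraju's two-pass depth-first search rather than Tarjan's single-pass algorithm [13] for brevity.

```python
import sys
from collections import defaultdict

def two_sat(num_vars, clauses):
    """2-SAT via strongly connected components of the implication graph
    (the linear-time method of [12]).  Variables are 1..num_vars; a clause is
    a pair of nonzero ints, -i meaning the negation of variable i.  Returns a
    satisfying assignment as {var: bool}, or None if unsatisfiable."""
    sys.setrecursionlimit(10000)
    graph, rgraph = defaultdict(list), defaultdict(list)

    def node(lit):                      # literal -> node id: x -> 2x, not x -> 2x + 1
        return 2 * abs(lit) + (1 if lit < 0 else 0)

    for a, b in clauses:                # (a or b) yields (not a -> b) and (not b -> a)
        for u, v in ((node(-a), node(b)), (node(-b), node(a))):
            graph[u].append(v)
            rgraph[v].append(u)

    nodes = [2 * v + s for v in range(1, num_vars + 1) for s in (0, 1)]
    order, seen = [], set()

    def dfs1(u):                        # first pass: record finish order on the graph
        seen.add(u)
        for w in graph[u]:
            if w not in seen:
                dfs1(w)
        order.append(u)

    comp = {}

    def dfs2(u, label):                 # second pass: label SCCs on the reverse graph
        comp[u] = label
        for w in rgraph[u]:
            if w not in comp:
                dfs2(w, label)

    for u in nodes:
        if u not in seen:
            dfs1(u)
    label = 0
    for u in reversed(order):           # SCCs come out in topological order (sources first)
        if u not in comp:
            dfs2(u, label)
            label += 1

    assignment = {}
    for v in range(1, num_vars + 1):
        if comp[2 * v] == comp[2 * v + 1]:
            return None                 # X and not X share an SCC: unsatisfiable
        # a variable is true when its positive literal lies closer to the sinks
        assignment[v] = comp[2 * v] > comp[2 * v + 1]
    return assignment

# (X1 or X2) and (not X1 or X2) forces X2 = True; X1 may take either value.
print(two_sat(2, [(1, 2), (-1, 2)]))
```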

4 3SAT is in P

Theorem 5. If all variables are unique, the equation

$$\max \beta_3(X_1, X_2, \dots, X_{n-1}, X_n) \qquad (24)$$

where

$$\beta_3(X_1, X_2, \dots, X_{n-1}, X_n) = \mu(X_1 + X_2 + X_3, \mu(X_4 + X_5 + X_6, \dots, \mu(X_{n-5} + X_{n-4} + X_{n-3}, X_{n-2} + X_{n-1} + X_n))) \qquad (25)$$

$$\mu(a,b) = \begin{array}{c|cccc} a \backslash b & 0 & 1 & 2 & 3 \\ \hline 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 1 & 1 & 1 \\ 2 & 0 & 1 & 1 & 1 \\ 3 & 0 & 1 & 1 & 1 \end{array} \qquad (26)$$

and $+$ is algebraic summation, can be solved for $X_i \in \{0,1\}$ in O(m).

Proof. Let us start to investigate (24) when $X_i \in \mathbb{R}$, $i \in \{1,2,\dots,n\}$. The function $\beta_3$ can be calculated within O(m) [10]. According to (25), equation (24) can be rewritten as follows:

$$\max \mu(X_1 + X_2 + X_3, \mu(X_4 + X_5 + X_6, \dots, \mu(X_{n-5} + X_{n-4} + X_{n-3}, X_{n-2} + X_{n-1} + X_n))) \qquad (27)$$

So, the m local partial maxima must satisfy the equalities $\max \mu(X_{k-2} + X_{k-1} + X_k) = 1$ to avoid a zero result for the global maximum. This leads to the inequalities $1 \leq X_{k-2} + X_{k-1} + X_k \leq 3$, but if we want to use (11) we must keep $X_{k-2} + X_{k-1} + X_k = 1$. If all variables are unique, (24) can be solved by repeatedly solving the system of LP equations

$$\max \sum_{i=k-2}^{k} X_i, \qquad X_k + X_{k-1} + X_{k-2} = 1, \qquad 0 \leq X_{k-2} \leq 1 \qquad (28)$$

Each of them has a non-zero maximum and can be solved in $O(3^{3.5})$. So all clauses can be optimized in $O(3^{3.5} m)$.

4.1 Special cases

Theorem 6. If some of the unique variables are negations $\bar{X}_i$, the equation

$$\max \beta_3(X_1, X_2, \dots, X_{n-1}, X_n) \qquad (29)$$

can be solved for $X_i \in \{0,1\}$ in O(m).

Proof. Since all variables are unique, let us replace all negations $\bar{X}_k$ with $\acute{X}_k$ and simply rename all the others to $\acute{X}_i$. Now (29) can be solved by repeatedly solving the system of LP equations

$$\max \sum_{i=k-2}^{k} \acute{X}_i, \qquad \acute{X}_k + \acute{X}_{k-1} + \acute{X}_{k-2} = 1, \qquad 0 \leq \acute{X}_{k-2} \leq 1, \qquad 0 \leq \acute{X}_{k-1} \leq 1, \qquad 0 \leq \acute{X}_k \leq 1 \qquad (30)$$

Each of them has a non-zero maximum and can be solved in $O(3^{3.5})$. Going back to the old variables does not change the complexity of each solution. So all clauses can be optimized in $O(3^{3.5} m)$.

Theorem 7. If some of the variables in different clauses are not unique, the equation

$$\max \beta_3(X_1, X_2, \dots, X_{n-1}, X_n) \qquad (31)$$

can be solved for $X_i \in \{0,1\}$ in O(m).

Proof. If $X_n, X_{n-1}, X_{n-2}$ are unique, let us start to solve (31) from the system of LP equations

$$\max \sum_{i=n-2}^{n} X_i, \qquad X_n + X_{n-1} + X_{n-2} = 1, \qquad 0 \leq X_{n-2} \leq 1, \qquad 0 \leq X_{n-1} \leq 1, \qquad 0 \leq X_n \leq 1 \qquad (32)$$

Now we know three unique variables. The maximum is reached and the system of equations is solved in $O(3^{3.5})$. If $X_n = X_{n-1}$, set $X_n = 1$, $X_{n-1} = 1$, $X_{n-2} = 1$.

All other sub-equations can be solved as follows: if the sub-equation being solved has one or two variables with earlier found values, the LP equations

$$\max \sum_{i=k-2}^{k} X_i, \qquad 1 \leq X_k + X_{k-1} + X_{k-2} \leq 3, \qquad 0 \leq X_{k-2} \leq 1 \qquad (33)$$

can be solved by reducing (33) after inserting the earlier found variable values reassigned to 1; if the sub-equation being solved has three unique variables, the LP equations

$$\max \sum_{i=k-2}^{k} X_i, \qquad X_k + X_{k-1} + X_{k-2} = 1, \qquad 0 \leq X_{k-2} \leq 1 \qquad (34)$$

can be solved. So all clauses can be optimized in $O(3^{3.5} m)$.

Theorem 8. If some of the variables in different clauses are not unique and are negations of each other, the equation

$$\max \beta_3(X_1, X_2, \dots, X_{n-1}, X_n) \qquad (35)$$

can be solved for $X_i \in \{0,1\}$ in $O(v+e)$.

Proof. Let us mark all variables as unique or not, starting from the end of the CNF. Then a cycle through the n variables must be repeated to rename them to $\acute{X}_k$, so that the renamed variables marked as not unique become negations, which must be replaced with $1 - \acute{X}_k$ when they are found a second time. In this way the first occurrence of a non-unique variable can be assigned 0, which leads to the value 1 for its negation. This can be done in O(2n). Now we start the solving process from the last clause. If not all variables are unique in the first clause, then after sorting the variables so that the third one is a negation and the second one is unique, the LP equations

$$\max \acute{X}_n + \acute{X}_{n-1} + 1 - \acute{X}_{n-2}, \qquad \acute{X}_n + \acute{X}_{n-1} + 1 - \acute{X}_{n-2} = 1, \qquad 0 \leq \acute{X}_{n-2} \leq 1, \qquad 0 \leq \acute{X}_{n-1} \leq 1, \qquad 0 \leq \acute{X}_n \leq 1 \qquad (36)$$

can be solved.
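A minimal sketch of (36) (Python with SciPy's linprog, an illustration only): the negated third literal enters through the substitution $1 - X$, so the equality constraint becomes $X_n + X_{n-1} - X_{n-2} = 0$ and every feasible point already attains the maximum value 1.

```python
from scipy.optimize import linprog

def solve_clause_lp_with_negation():
    """Per-clause LP (36): maximize X_n + X_{n-1} + (1 - X_{n-2}) subject to
    X_n + X_{n-1} + (1 - X_{n-2}) = 1 and 0 <= X_i <= 1."""
    c = [-1.0, -1.0, 1.0]        # minimize -(X_n + X_{n-1} + 1 - X_{n-2}), constant dropped
    A_eq = [[1.0, 1.0, -1.0]]    # X_n + X_{n-1} - X_{n-2} = 0
    b_eq = [0.0]
    bounds = [(0.0, 1.0)] * 3
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    x_n, x_n1, x_n2 = res.x
    return (x_n, x_n1, x_n2), x_n + x_n1 + (1.0 - x_n2)

# The objective equals 1 at every feasible point, e.g. X = (0, 0, 0) or (1, 0, 1).
print(solve_clause_lp_with_negation())
```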

If $\acute{X}_n, \acute{X}_{n-1}, \acute{X}_{n-2}$ are unique, let us start to solve (35) from the system of LP equations

$$\max \sum_{i=n-2}^{n} \acute{X}_i, \qquad \acute{X}_n + \acute{X}_{n-1} + \acute{X}_{n-2} = 1, \qquad 0 \leq \acute{X}_{n-2} \leq 1, \qquad 0 \leq \acute{X}_{n-1} \leq 1, \qquad 0 \leq \acute{X}_n \leq 1 \qquad (37)$$

Now we know three or two unique variables. Let us choose the clause walking path as the depth-first search of the strongly connected components of a graph, where the graph is formed by the clauses interconnected through the non-unique variables. So, the other sub-equations can be solved as follows: if the sub-equation being solved has one or two variables with earlier found values, the LP equations

$$\max \sum_{i=k-2}^{k} \grave{X}_i, \qquad 1 \leq \grave{X}_k + \grave{X}_{k-1} + \grave{X}_{k-2} \leq 3, \qquad 0 \leq \grave{X}_{k-2} \leq 1, \qquad 0 \leq \grave{X}_{k-1} \leq 1, \qquad 0 \leq \grave{X}_k \leq 1 \qquad (38)$$

where one or two of the three variables $\grave{X}_{k-2}, \grave{X}_{k-1}, \grave{X}_k$ are equal to $1 - X_q$, can be solved by reducing (38) after inserting the earlier found variable values reassigned to 1; if all variables of the sub-equation being solved are not unique and are negations of earlier found values in different clauses, their values can be assigned 0 and the values of the residual variables lead to 1; if all variables of the sub-equation being solved are not unique and are negations of earlier found values in the same clause, one of the variables must be assigned 1 and the other 0; if the sub-equation being solved has three unique variables, the LP equations

$$\max \sum_{i=k-2}^{k} X_i, \qquad X_k + X_{k-1} + X_{k-2} = 1, \qquad 0 \leq X_{k-2} \leq 1 \qquad (39)$$

can be solved. Each sub-system of equations is solved in $O(3^{3.5})$ to reach the global maximum 1. Finally, all clauses are optimized at most $v+e$ times (v is the number of vertices, e the number of edges), i.e. in $O(3^{3.5}(v+e))$.

5 kSAT is in P

Theorem 9. If all variables are unique, the equation

$$\max \beta_k(X_1, X_2, \dots, X_{n-1}, X_n) \qquad (40)$$

where

$$\beta_k(X_1, X_2, \dots, X_{n-1}, X_n) = \mu\Big(\sum_{i=1}^{k} X_i, \mu\Big(\sum_{i=k+1}^{2k} X_i, \dots, \mu\Big(\sum_{i=n-2k+1}^{n-k} X_i, \sum_{i=n-k+1}^{n} X_i\Big)\Big)\Big) \qquad (41)$$

$$\mu(a,b) = \begin{array}{c|ccccc} a \backslash b & 0 & 1 & 2 & \dots & n-1 \\ \hline 0 & 0 & 0 & 0 & \dots & 0 \\ 1 & 0 & 1 & 1 & \dots & 1 \\ 2 & 0 & 1 & 1 & \dots & 1 \\ \vdots & \vdots & \vdots & \vdots & & \vdots \\ n-1 & 0 & 1 & 1 & \dots & 1 \end{array} \qquad (42)$$

and $+$ is algebraic summation, can be solved for $X_i \in \{0,1\}$ in $O(k^{3.5} m)$.

Proof. Let us start to investigate (40) when $X_i \in \mathbb{R}$, $i \in \{1,2,\dots,n\}$. The function $\beta_k$ can be calculated within O(m) [10]. According to (42), equation (40) can be rewritten as follows:

$$\max \mu\Big(\sum_{i=1}^{k} X_i, \mu\Big(\sum_{i=k+1}^{2k} X_i, \dots, \mu\Big(\sum_{i=n-2k+1}^{n-k} X_i, \sum_{i=n-k+1}^{n} X_i\Big)\Big)\Big) \qquad (43)$$

If we have all variables unique, (41) can be solved by repeatedly solving the system of LP equations

$$\max \sum_{i=k+1}^{2k} X_i, \qquad \sum_{i=k+1}^{2k} X_i = 1, \qquad 0 \leq X_i \leq 1, \ i \in \{1,2,\dots,n\} \qquad (44)$$

Each of them has a non-zero maximum and can be solved in $O(k^{3.5})$. So all clauses can be optimized in $O(k^{3.5} m)$.

5.1 Special cases

Theorem 10. If some of the unique variables are negations $\bar{X}_i$, the equation

$$\max \beta_k(X_1, X_2, \dots, X_{n-1}, X_n) \qquad (45)$$

can be solved for $X_i \in \{0,1\}$ in $O(k^{3.5} m)$.

Proof. Since all variables are unique, let us replace all negations $\bar{X}_k$ with $Y_k$ and simply rename all the others to $Y_i$. Now (45) can be solved by repeatedly solving the system of LP equations

$$\max \sum_{i=k+1}^{2k} Y_i, \qquad \sum_{i=k+1}^{2k} Y_i = 1, \qquad 0 \leq Y_i \leq 1, \ i \in \{1,2,\dots,n\} \qquad (46)$$

Each of them has a non-zero maximum and can be solved in $O(k^{3.5})$. Going back to the old variables does not change the complexity of each solution. So all clauses can be optimized in $O(k^{3.5} m)$.
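The per-clause LPs of this section, (44) and (46), and the mixed form (52) below differ only in the clause width and in which literals are negated, so a single sketch covers them (Python with SciPy's linprog; the negation mask and the width-4 example are illustrative assumptions):

```python
from scipy.optimize import linprog

def solve_general_clause_lp(negated):
    """Generic per-clause LP of Section 5 (cf. (44), (46), (52)): for a clause of
    width k, maximize the sum of its literal values, a negated variable entering
    as 1 - X_i, subject to that sum being 1 and 0 <= X_i <= 1.
    `negated` is one boolean per literal."""
    k = len(negated)
    signs = [-1.0 if neg else 1.0 for neg in negated]
    const = float(sum(negated))              # the "+1" carried by each 1 - X_i
    c = [-s for s in signs]                  # linprog minimizes, so negate
    A_eq = [signs]                           # literal sum = 1, constants moved right
    b_eq = [1.0 - const]
    bounds = [(0.0, 1.0)] * k
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    literal_values = [x if s > 0 else 1.0 - x for x, s in zip(res.x, signs)]
    return list(res.x), sum(literal_values)

# A width-4 clause (X1 or X2 or not X3 or not X4): every optimum has literal sum 1.
print(solve_general_clause_lp([False, False, True, True]))
```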

Theorem 11. If some of the variables in different clauses are not unique, the equation

$$\max \beta_k(X_1, X_2, \dots, X_{n-1}, X_n) \qquad (47)$$

can be solved for $X_i \in \{0,1\}$ in $O(k^{3.5} m)$.

Proof. If $X_n, X_{n-1}, \dots, X_{n-k+1}$ are unique, let us start to solve (47) from the system of LP equations

$$\max \sum_{i=n-k+1}^{n} X_i, \qquad \sum_{i=n-k+1}^{n} X_i = 1, \qquad 0 \leq X_i \leq 1, \ i \in \{1,2,\dots,n\} \qquad (48)$$

Now we know k unique variables. The maximum is reached and the system of equations is solved in $O(k^{3.5})$. If $X_n = X_{n-1}$, set $X_n = 1$, $X_{n-2} = 1$, and so on. All other sub-equations can be solved as follows: if the sub-equation being solved has at least one variable with an earlier found value, the LP equations

$$\max \sum_{i=n-k+1}^{n} X_i, \qquad 1 \leq \sum_{i=n-k+1}^{n} X_i \leq k, \qquad 0 \leq X_i \leq 1, \ i \in \{1,2,\dots,n\} \qquad (49)$$

can be solved by reducing (49) after inserting the earlier found variable values reassigned to 1; if the sub-equation being solved has k unique variables, the LP equations

$$\max \sum_{i=n-k+1}^{n} X_i, \qquad \sum_{i=n-k+1}^{n} X_i = 1, \qquad 0 \leq X_i \leq 1, \ i \in \{1,2,\dots,n\} \qquad (50)$$

can be solved. So all clauses can be optimized in $O(k^{3.5} m)$.

Theorem 12. If some of the variables in different clauses are not unique and are negations of each other, the equation

$$\max \beta_k(X_1, X_2, \dots, X_{n-1}, X_n) \qquad (51)$$

can be solved for $X_i \in \{0,1\}$ in $O(k^{3.5}(v+e))$.

Proof. Let us mark all variables as unique or not, starting from the end of the CNF. Then a cycle through the n variables must be repeated to rename them to $\acute{X}_k$, so that the renamed variables marked as not unique become negations, which must be replaced with $1 - \acute{X}_k$ when they are found a second time. In this way the first occurrence of a non-unique variable can be assigned 0, which leads to the value 1 for its negation. This can be done in O(2n). Now we start the solving process from the last clause. If not all variables are unique in the first clause, then after sorting the variables so that the negations occur at the end of the list, the LP equations

$$\max \sum_{i=n-k+1}^{l} X_i + \sum_{i=l+1}^{n} (1 - X_i), \qquad \sum_{i=n-k+1}^{l} X_i + \sum_{i=l+1}^{n} (1 - X_i) = 1, \qquad 0 \leq X_i \leq 1, \ i \in \{1,2,\dots,n\} \qquad (52)$$

can be solved.

If $\acute{X}_n, \acute{X}_{n-1}, \dots, \acute{X}_{n-k+1}$ are unique, let us start to solve (51) from the system of LP equations

$$\max \sum_{i=n-k+1}^{n} X_i, \qquad \sum_{i=n-k+1}^{n} X_i = 1, \qquad 0 \leq X_i \leq 1, \ i \in \{1,2,\dots,n\} \qquad (53)$$

Now we know at most k unique variables. Let us choose the clause walking path as the depth-first search of the strongly connected components of a graph, where the graph is formed by the clauses interconnected through the non-unique variables. So, the other sub-equations can be solved as follows: if the sub-equation being solved has at least one variable with an earlier found value, the LP equations

$$\max \sum_{i=k+1}^{l} X_i + \sum_{i=l+1}^{2k} (1 - X_i), \qquad 1 \leq \sum_{i=k+1}^{l} X_i + \sum_{i=l+1}^{2k} (1 - X_i) \leq k, \qquad 0 \leq X_i \leq 1, \ i \in \{1,2,\dots,n\} \qquad (54)$$

can be solved by reducing (54) after inserting the earlier found variable values reassigned to 1; if all variables of the sub-equation being solved are not unique and are negations of earlier found values in different clauses, their values can be assigned 0 and the values of the residual variables lead to 1; if all variables of the sub-equation being solved are not unique and are negations of earlier found values in the same clause, one of the variables must be assigned 1 and the other 0; if the sub-equation being solved has k unique variables, the LP equations

$$\max \sum_{i=k+1}^{2k} X_i, \qquad \sum_{i=k+1}^{2k} X_i = 1, \qquad 0 \leq X_i \leq 1, \ i \in \{1,2,\dots,n\} \qquad (55)$$

can be solved. Each sub-system of equations is solved in $O(k^{3.5})$ to reach the global maximum 1. Finally, all clauses are optimized at most $v+e$ times (v is the number of vertices, e the number of edges), i.e. in $O(k^{3.5}(v+e))$.

6 Conclusions

Every 2SAT problem is solvable using the linear programming formulation in linear time. Every 3SAT problem is solvable using the linear programming formulation in linear time. Every kSAT problem with k > 3 is solvable using the linear programming formulation in linear time. Every NP mathematical problem is reducible in polynomial time to a kSAT problem and is solvable in linear time if there exists a full, appropriate and correct knowledge basis for it and the time to obtain each item of the knowledge basis is much less than the calculation time on these items.

References

1. Cook S. (1971), The complexity of theorem proving procedures, Proceedings of the Third Annual ACM Symposium on Theory of Computing, pp. 151-158.
2. Krom M. R. (1967), The Decision Problem for a Class of First-Order Formulas in Which all Disjunctions are Binary, Zeitschrift für Mathematische Logik und Grundlagen der Mathematik, vol. 13, pp. 15-20, doi:10.1002/malq.19670130104.
3. Aspvall B., Plass M. F., Tarjan R. E. (1979), A linear-time algorithm for testing the truth of certain quantified boolean formulas, Information Processing Letters, vol. 8, no. 3, pp. 121-123, doi:10.1016/0020-0190(79)90002-4.
4. Even S., Itai A., Shamir A. (1976), On the complexity of time table and multi-commodity flow problems, SIAM Journal on Computing, vol. 5, no. 4, pp. 691-703, doi:10.1137/0205048.
5. Levin L. (1973), Universal search problems, Problems of Information Transmission, vol. 9, no. 3, pp. 265-266 (in Russian); translated into English by Trakhtenbrot B. A. (1984), A survey of Russian approaches to perebor (brute-force searches) algorithms, Annals of the History of Computing, vol. 6, no. 4, pp. 384-400, doi:10.1109/MAHC.1984.10036.
6. Diaby M. (2010), Linear programming formulation of the vertex colouring problem, Int. J. of Mathematics in Operational Research, vol. 2, no. 3, pp. 259-289.
7. Diaby M. (2010), Linear programming formulation of the set partitioning problem, Int. J. of Operational Research, vol. 8, no. 4, pp. 399-427.
8. Maknickas A. A. (2010), Finding of k in Fagin's R. Theorem 24, arXiv:1012.5804v1.
9. Maknickas A. A. (2012), How to solve kSAT in polinomial time, arXiv:1203.6020v2.
10. Maknickas A. A. (2012), How to solve kSAT in polinomial time, arXiv:1203.6020v1.
11. Adler I., Karmarkar N., Resende M. G. C., Veiga G. (1989), An Implementation of Karmarkar's Algorithm for Linear Programming, Mathematical Programming, vol. 44, pp. 297-335.
12. Aspvall B., Plass M. F., Tarjan R. E. (1979), A linear-time algorithm for testing the truth of certain quantified boolean formulas, Information Processing Letters, vol. 8, no. 3, pp. 121-123.
13. Tarjan R. E. (1972), Depth-first search and linear graph algorithms, SIAM Journal on Computing, vol. 1, no. 2, pp. 146-160.
14. Cheriyan J., Mehlhorn K. (1996), Algorithms for dense graphs and networks on the random access computer, Algorithmica, vol. 15, no. 6, pp. 521-549.