Today’s keynote is by Moshe Vardi (Google Scholar) talking about… SAT solving! This should be fun.

**P=NP**

We all know about the P versus NP problem, a CS problem that is so important that even mathematicians think it matters. Consider these two problems: the Hamiltonian cycle and the Eulerian cycle, i.e. visiting all nodes versus all edges of a graph. The Eulerian cycle is of course known from the Königsberg bridges.

We measure the complexity of a problem by counting how many operations it would take on a Turing machine. Many problems can be solved in polynomial time, like checking whether a number is even or a square, or… finding an Eulerian cycle in a graph. For the Hamiltonian cycle, on the other hand, we do not know if we can do it in polynomial time.
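To make the contrast concrete, Euler's theorem gives a fast test for the Eulerian case: a connected graph has an Eulerian cycle exactly when every node has even degree, which is checkable in linear time, no search needed. A minimal Python sketch (the function name and graph encoding are my own, not from the talk):

```python
from collections import defaultdict

# Euler's theorem: a connected graph has an Eulerian cycle iff every
# node has even degree. Checking this takes linear time, which is why
# the Eulerian-cycle question is in P.

def has_eulerian_cycle(edges):
    degree = defaultdict(int)
    adj = defaultdict(set)
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
        adj[u].add(v)
        adj[v].add(u)
    # connectivity check via DFS from an arbitrary node
    nodes = list(degree)
    seen, stack = set(), [nodes[0]]
    while stack:
        u = stack.pop()
        if u not in seen:
            seen.add(u)
            stack.extend(adj[u] - seen)
    return seen == set(nodes) and all(d % 2 == 0 for d in degree.values())

# the Königsberg bridge graph: four land masses, seven bridges
konigsberg = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
              ("A", "D"), ("B", "D"), ("C", "D")]
print(has_eulerian_cycle(konigsberg))  # False: odd degrees, as Euler showed
```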

An important feature of a problem in NP is that when you get an answer, you can verify it in polynomial time. When you get a cycle it is easy to see if it covers all nodes or edges: at most you have to check all of them.
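That verification step is easy to show in code. A minimal sketch (names and graph encoding are my own, not from the talk) that checks a claimed Hamiltonian cycle in polynomial time, even though *finding* one may be hard:

```python
# Polynomial-time certificate checking: given an n-node graph (as an edge
# list) and a proposed cycle, verify it is a Hamiltonian cycle.

def is_hamiltonian_cycle(n, edges, cycle):
    """Check that `cycle` visits every node exactly once and returns to
    its start, using only edges that exist in the graph."""
    if len(cycle) != n or set(cycle) != set(range(n)):
        return False  # must visit each node exactly once
    edge_set = {frozenset(e) for e in edges}
    # every consecutive pair (wrapping around) must be an actual edge
    return all(frozenset((cycle[i], cycle[(i + 1) % n])) in edge_set
               for i in range(n))

# 4-cycle graph: 0-1-2-3-0
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(is_hamiltonian_cycle(4, edges, [0, 1, 2, 3]))  # True
print(is_hamiltonian_cycle(4, edges, [0, 2, 1, 3]))  # False: 0-2 not an edge
```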

**Why does this matter?**

Why is it so important to know whether P = NP? Moshe shares a fun story where the head of his institute (a physicist) asked why computer scientists do not just accept it, like they do in physics. Firstly, it means that we do not really understand the problem. Secondly, if P did equal NP, all sorts of encryption like RSA would not work anymore, so that would be quite problematic…! And not everyone believes P is not NP; Donald Knuth, for example, believes P is NP.

**NP-Complete**

NP-complete problems are the hardest problems in NP, and the nice thing about them is that when we solve one, we solve all of them, because they can be reduced to each other! More about this later in the talk.

**Boolean Logic**

Boole had the insight that *and* and *or* have similar properties to addition and multiplication, and this led to Boolean algebra. In Boolean logic, there is the problem of Boolean satisfiability: given a Boolean expression, like

x /\ y \/ z

Satisfiability is the problem that asks: is there an assignment to the variables such that the formula is true? Already in the 1850s a follower of Boole, William Stanley Jevons, realized that it was really hard, and he built a wooden machine (looking like a piano) that could do it for 4 variables.
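For a handful of variables you can simply try every assignment, which is essentially what Jevons's machine did mechanically. A brute-force sketch in Python (the helper name is mine); note the 2^n blow-up that makes this approach hopeless at scale:

```python
from itertools import product

# Brute-force satisfiability for the formula from the text, x ∧ y ∨ z:
# enumerate all 2^n assignments and keep the satisfying ones. Exponential
# in the number of variables, which is exactly why SAT is hard in general.

def satisfying_assignments(formula, nvars):
    """Try all 2^n truth assignments; return those that satisfy `formula`."""
    return [bits for bits in product([False, True], repeat=nvars)
            if formula(*bits)]

sols = satisfying_assignments(lambda x, y, z: (x and y) or z, 3)
print(len(sols))  # 5 of the 8 assignments satisfy the formula
```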

**Boolean Reasoning**

In the fifties, people in the American government (the NSA!) started working on it; in 1958 Davis and Putnam described their method, the basis of what became DPLL (Davis–Putnam–Logemann–Loveland). Interesting that the NSA already realized back then that it was very interesting! DPLL was the state of the art for a very long time. In the mid-90s a revolution started with CDCL (conflict-driven clause learning), with these heuristics:

- back jumping
- smart unit-clause preference
- conflict-driven learning
- smart choice heuristics
- restarts

These are implemented in tools like GRASP (1996) and Chaff (2001), which can handle millions of variables. Here is a picture of the immense speed-up of SAT solvers over the years:
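To make the baseline concrete, here is a minimal sketch of the classic DPLL backbone (unit propagation plus case splitting) on CNF clauses in DIMACS-style signed-integer form. The code and names are my own illustration, not from the talk; real CDCL solvers like GRASP and Chaff add the learning, backjumping and restart heuristics listed above:

```python
# Minimal DPLL on CNF: clauses are lists of signed ints (1 means x1,
# -1 means ¬x1). Returns a tuple of satisfying literals, or None if UNSAT.

def dpll(clauses, assignment=()):
    if any(c == [] for c in clauses):
        return None                       # conflict: an empty clause
    unit = next((c[0] for c in clauses if len(c) == 1), None)
    if unit is not None:                  # unit propagation
        return dpll(simplify(clauses, unit), assignment + (unit,))
    if not clauses:
        return assignment                 # all clauses satisfied
    lit = clauses[0][0]                   # naive choice heuristic
    for choice in (lit, -lit):            # case split: try both polarities
        result = dpll(simplify(clauses, choice), assignment + (choice,))
        if result is not None:
            return result
    return None

def simplify(clauses, lit):
    """Assign `lit` true: drop satisfied clauses, remove ¬lit elsewhere."""
    return [[l for l in c if l != -lit] for c in clauses if lit not in c]

print(dpll([[1, 2], [-1, 2], [-2, 3]]))  # (1, 2, 3)
print(dpll([[1], [-1]]))                 # None: unsatisfiable
```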

**Applications to SE**

SAT solvers are now so good that they can really be used by people in practice, and Moshe gives Microsoft credit for this. Their Z3 solver is used in software development all around Microsoft (not just research). What are some of the applications of SAT solving?

**Verification**

There are two different forms of verification: formal and dynamic. Formal means proofs, while dynamic encompasses simulation, testing and validation. Moshe estimated that this second category is about 90% of all verification activities in practice. A practical example is verifying floating-point division: it is impossible to try all values of x and y for a 128-bit chip. The manual approach is to read the specs and then write test cases. But this is like programming, so you can only realistically write a few test cases per day, which then run on the simulator in a few minutes, after which the simulator sits idle for the rest of the day!

Moshe was asked to improve this, however, it was not easy since the testers were too busy to talk to him 🙂

Wouldn’t it be nice if we could generate a lot of test cases? And not entirely random: preferably we want to specify constraints and then generate random, uniformly distributed solutions. This problem turned out to have many different applications, among which is generating quizzes in MOOCs!

When Moshe looked into it, there was already a solution to this, the Bellare–Goldreich–Petrank algorithm, but it did not scale as it was: it could only handle 16 variables (remember, modern SAT solvers could do millions already!). By using universal hashing, they were able to scale to millions of variables. However, entirely uniform sampling was too hard; the algorithm can be configured to be more precise (but then it will be slower). The approach is as follows:

The interesting problem is how to map the solution space into a nice “grid” of cells, and that is where universal hashing comes in. XOR-based hashing is used, where every variable is selected with a probability of 1/2, meaning every XOR constraint splits the solution space roughly in half.
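A toy sketch of that XOR idea (the names and the toy formula are mine; real tools add the XOR constraints to a SAT solver instead of enumerating assignments):

```python
import random
from itertools import product

# XOR-based universal hashing: each random parity constraint includes
# every variable with probability 1/2, so adding one cuts the solution
# space roughly in half. Repeating this partitions the space into a
# "grid" of small cells from which uniform-ish samples can be drawn.

def random_xor_constraint(nvars, rng):
    """Pick each variable with probability 1/2, plus a random parity bit."""
    vars_in = [i for i in range(nvars) if rng.random() < 0.5]
    return vars_in, rng.randrange(2)

def satisfies_xor(bits, constraint):
    vars_in, parity = constraint
    return sum(bits[i] for i in vars_in) % 2 == parity

def cell(formula, nvars, constraints):
    """Solutions of `formula` that also satisfy all XOR constraints."""
    return [bits for bits in product([0, 1], repeat=nvars)
            if formula(*bits)
            and all(satisfies_xor(bits, c) for c in constraints)]

rng = random.Random(0)
formula = lambda *bits: sum(bits) >= 2  # toy formula over 4 variables
full = cell(formula, 4, [])
halved = cell(formula, 4, [random_xor_constraint(4, rng)])
print(len(full), len(halved))  # one constraint roughly halves the count
```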

**SMT: Satisfiability Modulo Theories**

The successes achieved in SAT solving led people to think about richer problems to be solved with solvers. One of these problems is counting: how many solutions does this formula have? This is again a problem with many applications, including Bayesian inference and planning under uncertainty. Counting is a lot harder than just solving; the state of the art was about 50 variables. Here too an approximate method was applied by Moshe, where again the solution space is split into cells so small that the number of solutions in each cell can be counted directly. If you do this a number of times, each time the confidence will increase (but of course also the time spent calculating).
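The cell-splitting idea can be sketched in a few lines. This is a toy illustration in the spirit of hashing-based approximate counters such as ApproxMC; the names, the toy formula and the cell-size limit are mine, and plain enumeration stands in for the SAT-solver calls a real tool would make:

```python
import random
from itertools import product

# Hashing-based approximate model counting: add random XOR constraints
# until the surviving "cell" of solutions is small enough to count
# directly, then scale the cell count back up by 2^(number of constraints),
# since each constraint halves the space in expectation.

def approx_count(formula, nvars, cell_limit=8, rng=random.Random(1)):
    constraints = []
    while True:
        cell = [bits for bits in product([0, 1], repeat=nvars)
                if formula(*bits)
                and all(sum(bits[i] for i in vs) % 2 == parity
                        for vs, parity in constraints)]
        if len(cell) <= cell_limit:
            return len(cell) * 2 ** len(constraints)
        # each new XOR constraint includes each variable with prob. 1/2
        constraints.append(([i for i in range(nvars) if rng.random() < 0.5],
                            rng.randrange(2)))

formula = lambda *bits: sum(bits) % 3 == 0  # toy formula over 6 variables
exact = sum(1 for bits in product([0, 1], repeat=6) if formula(*bits))
print(exact, approx_count(formula, 6))  # estimate should be near the exact 22
```

Running the estimate several times with fresh random constraints and taking a median is what buys the increasing confidence mentioned above, at the cost of more solver time.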

**Wrap up**

In SAT solvers, amazing progress has been made, but not everyone is aware of it. Moshe’s solution is to call it “deep solving” from now on 🙂

There are so many applications that SAT could be useful for, and the only thing you have to do is transform your problem to SAT. So the challenge to the audience is to think about where you can apply it! If you use C# or F#, you could use my easy library for that 😉

There are a number of interesting open theoretical problems, like understanding why some formulas with millions of variables are easy to solve while smaller ones are a lot harder.

One of the coolest takeaways for me was this idea that if P = NP the world would be a different place, but we are making such great progress that it might not matter all that much anymore (apart from the million dollars you would get!)