Programming languages: Thinking the unthinkable — Tomas Petricek

Tomas is going to talk about Michel Foucault! Wow! From a book (missed the title) comes this excerpt:


The idea here is incommensurability, or what Foucault calls an episteme: the idea that two theories do not share a basis that would allow evaluating them using a common metric. There are a few other forms of this idea as well, such as Kuhn's paradigms, Lakatos's research programmes and Polanyi's personal knowledge.

So, what are some of the common hidden assumptions in programming? An example:

“Programming is error-prone. It is even worse when programming a quantum computer or designing quantum communication protocols, because human intuition is much better adapted to the classical world than to the quantum world. How can we build automatic tools for verifying correctness of quantum programs? A logic for verification of both partial correctness and total correctness of quantum programs was developed in our TOPLAS ’2011 paper. The (relative) completeness of this logic was proved. Recently, a theorem prover for verification of quantum programs was built based on this logic [arXiv: 1601.03835]. To further develop automatic tools, we are working on techniques for invariant generation and synthesis of ranking functions (for termination analysis) of quantum programs.” (from:

What is happening here? The authors say that programming is not intuitive. So… are they going to make it more intuitive? They don't! The hidden assumption is that formal methods can help; that is their research programme. You need a framework to get things done, but it would be great if we were more explicit about it.

Mark Priestley, in A Science of Operations, talks about the research programme behind Algol, which applied logic to programming.

The research programme we follow influences not only how we ask questions, but also which questions we ask. A good question, then, is to ask what questions we ask, both as individual researchers and as groups (like PPIG). One assumption behind computer science is that it became more mathematical in order to become more scientific.


How could we apply different epistemes to computer science? Tomas proposes a number of directions.

Classical episteme
Foucault described the classical episteme, in which museums simply collect and classify species without really doing anything with them. In his paper, Tomas writes: “In programming language research, many novel ideas that defy mathematization are left out because they are too “messy” to be considered through the predominant formal perspective. If the episteme made us seek relationships, those would all become within the realm of computer science.”

Common Law
The idea of legal ruling is that courts base their verdicts on earlier verdicts, reflecting the basic idea that actions speak louder than words. In programming, coding katas could be related: practice is shared, and there are no real rules.

Renaissance episteme
Here, different aspects of a subject were studied together, for example an animal’s habitat, its food and its use in magic. Maybe modern computer science has something to gain from the Renaissance episteme, centred around resemblances: considering the theory and practice of a language together.

To close a brilliant talk, Tomas reminds us that in the 18th century the French Academy of Sciences denied that meteorites fall from the sky, because of all the superstition associated with them. By now, we have learned to separate the meteorites from the beliefs. What are we missing by combining things that should be separate, or by separating things that belong together?


The paper is here: