Soap Bubbles, Honeycombs, Snowflakes, Oranges on top of Each Other, and Programming Language Compilers
In 1887, Lord Kelvin, notable for his achievements in thermodynamics, asked himself the following question: what is the most efficient bubble foam? To be precise, what is the structure of bubbles of equal volume with the least surface area? Kelvin proposed a slightly curved variant of the bitruncated cubic honeycomb, inspired by the honeycombs of bees. This structure, Kelvin thought, must be the most efficient way to construct a foam, like the one you’d observe when blowing soap bubbles.
An aside about bees. Two hypotheses have been proposed for the hexagonal shape of bee cells. One holds that the hexagon evolved because it fills space while minimizing surface area, and therefore wax. The other holds that, far from geometric perfection, the hexagon is simply the shape cells take when bees squeeze them against one another; it notes that individual cells built outside the grid are irregular and not at all space-efficient.
For a hundred years, mathematicians considered Kelvin’s solution optimal. It seemed obvious, but no one could produce a formal proof of its efficiency. Then, in 1993, the physicist Denis Weaire and his student Robert Phelan ran a computer simulation and discovered a different structure, now bearing their names, that is slightly more efficient. The Weaire–Phelan structure is non-obvious in that it uses two different kinds of cells, which nevertheless satisfy the formal criterion of equal volume.
Keep that in mind when considering the similar problem of stacking balls, whose answer looks even more obvious at first glance. In 1611, the astronomer Johannes Kepler conjectured that face-centered cubic packing, the arrangement that arises naturally when you stack balls in layers, as in any supermarket pyramid of oranges, is the most efficient packing. It leaves slightly less than 26% of space unfilled. Other packings are known that match this efficiency, but none that beat it. There’s a funny story about Kepler’s essay, On the Six-Cornered Snowflake: Kepler had no money to buy his friend a Christmas present, because his employer hadn’t paid him for some time. Since he had nothing to give, Kepler instead wrote his friend a meditation on nothing, in which he considers the closest things to nothing then known, such as snowflakes.
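For reference, the roughly 26% figure follows from the known density of face-centered cubic packing, which fills the fraction

```latex
\frac{\pi}{3\sqrt{2}} \approx 0.7405
```

of space, leaving about $1 - 0.7405 \approx 25.95\%$ unfilled.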
In 1998, the mathematician Thomas Hales produced a proof of Kepler’s conjecture. The proof is widely accepted, but since it is a computer-assisted proof by exhaustion, one that runs through all the possibilities and checks whether any beats Kepler’s packing, it is very hard to follow and to verify. This is why many mathematicians remain skeptical of computer-aided proofs and prefer ones a human can check. Another famous result, the four-color theorem, which states that any map can be colored with no more than four colors such that no two adjacent regions share a color, was also proven by a computer-assisted proof by exhaustion.
But mathematicians aren’t Luddites; they don’t distrust computers by reflex. In fact, there is a theoretical result that gives grounds for optimism about the future of mathematical proofs. Known as the Curry-Howard isomorphism, it establishes a correspondence between computer programs and formal proofs: a program of a given type can be read as a proof of the proposition that type encodes. The Coq proof assistant has a programming language designed with this isomorphism in mind. One of its applications is programming language semantics. Semantics is the field that studies meaning, and programming language semantics is about how compilers and interpreters, the programs that translate code written in a higher-level programming language into machine code, actually translate. Does the compiler translate the code as intended, or could there be bugs that make it mistranslate? Coq can help verify that a compiler translates the code exactly as intended.
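To make the correspondence concrete, here is a minimal sketch in Coq (the names are illustrative, not from any particular development): the same construction can be read either as a proof that A implies A, or as the identity function on A.

```coq
(* Under Curry-Howard, implication corresponds to the function type:
   a proof of "A implies A" is a function from A to A, i.e. the identity. *)
Definition id_proof : forall (A : Prop), A -> A :=
  fun (A : Prop) (a : A) => a.

(* The same term, built instead as an interactive proof script. *)
Theorem A_implies_A : forall (A : Prop), A -> A.
Proof.
  intros A a.  (* assume the proposition A and a proof a of it *)
  exact a.     (* that assumption is already the proof we need *)
Qed.
```

Checking a proof then amounts to type-checking a program, which is exactly the mechanical task computers are good at, and the reason a Coq-checked proof is easier to trust than an ad hoc proof by exhaustion.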