In the merry software world, the challenges of improving our potential replacements are far too interesting and satisfying to leave room for any worry about consequences! :)
On a slightly related note… Competitive programming is surprisingly good at teaching you the right set of techniques to make sure your code is correct. You won't progress beyond a certain stage unless you pick them up.
Appropriate abstractions (i.e., "idiomatic code for the language and codebase") make program verification easy. If you are hand-writing an appropriately abstracted program, there's little benefit to thinking about loop invariants and pre/post conditions, since they don't exist at that level of generality: correct proofs follow directly from correct code.
I highly recommend reading the book; it explains the concept of writing idiomatic code far better than I ever could.
How do you idiomatically write a loop to iterate over signed ints from i to j (inclusive) in increasing order, given i <= j?
What does that loop do when j is INT_MAX?
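A minimal sketch of the trap, with the per-element work passed in as a hypothetical thing_with() callback:

void naive_inclusive(int i, int j, void (*thing_with)(int)) {
    /* Naive attempt at "i to j inclusive". If j == INT_MAX, k <= j is true
     * for every representable int, so the loop never terminates legitimately;
     * the final k++ is signed overflow, i.e. undefined behavior. */
    for (int k = i; k <= j; k++)
        thing_with(k);
}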
It's easy to think that's what do/while is for, but it turns out to be really hard to do the increment operation after the conditional, in general. What you really want is a loop structure with the conditional in the middle, and the only general purpose tool you get for that is a break. C (or any language with similar syntax) really doesn't have an idiom for doing this correctly.
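A sketch of that condition-in-the-middle shape, using a break and assuming i <= j as in the question (thing_with() is again a stand-in):

void inclusive_loop(int i, int j, void (*thing_with)(int)) {
    /* Process k, then decide whether to advance. No value ever needs to
     * exceed j, so j == INT_MAX is handled without overflow. */
    for (int k = i; ; k++) {
        thing_with(k);
        if (k == j)
            break;   /* exit before the increment that would overflow */
    }
}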
int i = start;
do thing_with(i); while (i++ < end);   /* the increment still runs on the final test, so end == INT_MAX overflows */
You could write functions to do the update and return the old value so you could use them in the same way, but I don't like this either. This is mostly because it orders the termination check and the update logic the wrong way around. If there's IO involved in checking for the next thing, for example, side effects of that unnecessary operation might interfere with other code.
You could resolve that by moving the termination check into the update logic as well, but now you're seriously complecting what should be independent operations. I don't think the tradeoff is there versus just using a break. But mostly, this is a self-inflicted problem in C's language constructs and idioms. I just don't have this problem in many other languages, because they provide end-inclusive looping constructs.
Code for which there is not even a toehold for an imagined proof might be worth cordoning off from better code.
Easier said than done. It is certainly feasible on greenfield projects where all the code is written by you (recently), and you have a complete mental model of the data layout and code dependencies. It's much harder to prove stuff this way when you call foo(), bar() and baz() across unit boundaries, when they modify global state and are written by different developers.
1. Progressively reducing the number of holes in your invariants
2. Building them such that there's a pit of success (engineers coming after you are aware of the invariants and "nudged" in the direction of using the pathways that maintain them). Documentation can help here, but how you structure your code also plays a part (and is in my experience the more important factor)
If, instead, you make the global variable immutable, make the state variable into a function argument, or even wrap the mutable global state in some kind of helper class, then you only need to know what the callers of certain functions are doing. The visibility of those functions can be limited. Caller behavior can be further constrained with assertions inside the function. All of these (can) make it easier for a reader to prove that the code is correct.
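A small C-flavored sketch of that last option (a file-scoped variable standing in for the helper class; names made up):

#include <assert.h>

/* The counter is visible only inside this translation unit, so proving
 * things about it means reading this file's functions and nothing else. */
static int active_connections = 0;

void connections_acquire(void) {
    active_connections++;
}

void connections_release(void) {
    assert(active_connections > 0);   /* constrain callers: no release without a prior acquire */
    active_connections--;
}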
I think most programmers already do this, actually; they just don't think of their decisions this way.
I think this reinforces the article's point.
Code like this is much more likely to contain bugs, and to be harder to maintain without introducing more, than programs written from the outset with this goal of "provability".
I went in with 3 tickets in mind to fix. I found half a dozen more while I was down there, and created 2 myself. I don't know if I got off easy or I was distracted and missed things.
The other project I'm working on is not dissimilar. Hey did you guys know you have a massive memory leak?
Once cancer has metastasized, the treatment plans are more aggressive and less pleasant. The patient can still be saved in many cases, but that depends on a lot of external factors.
In addition to pre-conditions and post-conditions, I would like to emphasize that loop invariants and structural induction are powerful techniques in CS proofs. https://en.wikipedia.org/wiki/Loop_invariant , https://en.wikipedia.org/wiki/Structural_induction
These notes from UofT's CSC236H are on-topic: https://www.cs.toronto.edu/~david/course-notes/csc236.pdf#pa...
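As a toy illustration (my own sketch, not from those notes), a loop with its precondition, invariant, and postcondition written out as comments and assertions:

#include <assert.h>
#include <stddef.h>

/* Returns the largest element of a[0..n).
 * Precondition: n > 0.
 * Loop invariant: at the top of each iteration, best == max of a[0..i). */
int max_of(const int *a, size_t n) {
    assert(n > 0);                     /* precondition */
    int best = a[0];
    for (size_t i = 1; i < n; i++) {
        /* invariant holds here: best is the maximum of a[0..i) */
        if (a[i] > best)
            best = a[i];
    }
    /* postcondition: best is the maximum of a[0..n) */
    return best;
}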
So more practical type systems can be viewed as "little" theorem provers in the sense the author is describing. "Thinking in types" so to speak, is mathematically equivalent to "writing little proofs in your head". I would also add that merely using types is not equivalent to "thinking in types" as one would be required to do writing a sufficiently advanced Haskell program.
Having an advanced type system does seem to encourage this sort of informal proof-oriented thinking more than imperative and largely untyped languages do, though, since in those languages preconditions, postconditions, loop invariants, and inductive arguments about loops are things you have to carry entirely in your head.
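Even a plain language like C can approximate this with a wrapper type that acts as evidence a check has already happened (names here are invented for illustration):

#include <stdbool.h>
#include <string.h>

/* The only way to obtain a nonempty_str is via nonempty_from(), so any
 * function accepting one may assume "non-NULL and non-empty" without
 * rechecking it. */
typedef struct { const char *s; } nonempty_str;

static bool nonempty_from(const char *s, nonempty_str *out) {
    if (s == NULL || s[0] == '\0')
        return false;
    out->s = s;
    return true;
}

static size_t checked_len(nonempty_str str) {
    /* precondition carried by the type: str.s has at least one character */
    return strlen(str.s);
}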
I also think things get complicated with holding things constant as a necessity. Often times, the best way to find a bug is to ask not "how was this possible?" but "why would another part of the system modify this data?"
Obviously, if you can make things reference capable data, you should do so. But, all too often "this data being X" has a somewhat self evident explanation that can help. And just "copy so I don't have to worry about it" lands you squarely in "why is my copy out of date?"
For embedded control I'm finding the blackboard pattern with single assignment to be super useful.
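Roughly, a sketch of that idea with invented names: each blackboard entry is published at most once per control cycle, and readers can see whether it has been set:

#include <assert.h>
#include <stdbool.h>

struct bb_entry { double value; bool set; };

/* Publish a value; publishing the same entry twice in one cycle is a bug. */
static void bb_publish(struct bb_entry *e, double v) {
    assert(!e->set);
    e->value = v;
    e->set = true;
}

/* Read a value only if it has been published this cycle. */
static bool bb_read(const struct bb_entry *e, double *out) {
    if (!e->set)
        return false;
    *out = e->value;
    return true;
}

/* At the start of each control cycle, entries go back to unset. */
static void bb_reset(struct bb_entry *e) {
    e->set = false;
}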
I notice you didn't really talk much about types. When I think of proofs in code my mind goes straight to types because they literally are proofs. Richer typed languages move even more in that direction.
One caveat I would add is it is not always desirable to be forced to think through every little scenario and detail. Sometimes it's better to ship faster than write 100% sound code.
And as an industry, as much as I hate it, the preference is for languages and code that are extremely imperative and brittle. Very few companies want to be writing code in Idris or Haskell on a daily basis.
I learn more from these concrete case studies than from general principles (though I agree those are important too).
One of my most recent bugs was a crash bug in a thread-pool that used garbage-collected objects (this is in C++) to manage network connections. Sometimes, during marking, one of the objects I was trying to mark would be already freed, and we crashed.
My first thought was that this was a concurrency problem. We're supposed to stop all threads (stop the world) during marking, but what if a thread was not stopped? Or what if we got an event on an IO completion port somehow during marking?
I was sure that it had to be a concurrency problem because (a) it was intermittent and (b) it frequently happened after a connection was closed. Maybe an object was getting deleted during marking?
The only thing that was weird was that the bug didn't happen under stress (I tried stress testing the system, but couldn't reproduce it). In fact, it seemed to happen most often at startup, when there weren't too many connections or threads running.
Eventually I proved to myself that all threads were paused properly during marking. And with sufficient logging, I proved that an object was not being deleted during marking, but the marking thread crashed anyway.
[Quick aside: I tried to get ChatGPT to help--o3 pro--and it kept on suggesting a concurrency problem. I could never really convince it that all threads were stopped. It always insisted on adding a lock around marking to protect it against other threads.]
The one thing I didn't consider was that maybe an object was not getting marked properly and was getting deleted even though it was still in use. I didn't consider it because the crash was in the marking code! Clearly we're marking objects!
But of course, that was the bug. Looking at the code I saw that an object was marked by a connection but not by the listener it belonged to. That meant that, as long as there was a connection active, everything worked fine. But if ever there were no connections active, and if we waited a sufficient amount of time, the object would get deleted by GC because the listener had forgotten to mark it.
Then a connection would use this stale object and on the next marking pass, it would crash.
function simplifyTree(root: Node): Node {
  const newChildren = [] as Array<Node>;
  for (const child of root.children) {
    const simplifiedChild = simplifyTree(child); // recurse into each child first
    if (shouldContract(simplifiedChild)) {
      // Contract the child: splice its children directly into our list.
      for (const grandChild of simplifiedChild.children) {
        newChildren.push(grandChild);
      }
    } else {
      newChildren.push(simplifiedChild);
    }
  }
  root.children = newChildren;
  return root;
}
It made me think a lot more about every line of code I wrote and definitely helped me become a better programmer.
Developers who still "think through problems" instead of just managing agents to solve problems on their behalf will be obsoleted in the next 2 years.
I especially remember how learning the equivalence of recursion and induction immediately eliminated the “frustrated trial and error” approach for making recursive algorithms that work.
Several programming languages and libraries support Design-by-Contract (https://en.wikipedia.org/wiki/Design_by_contract) which lets you specify preconditions, postconditions, and invariants directly in your code.
Those predicates can be checked in various ways (depending on how deeply Design-by-Contract is supported) to help you know that your code is working correctly.
Ada supports Design-by-Contract as part of the language: https://learn.adacore.com/courses/intro-to-ada/chapters/cont...
SPARK extends Ada Design-by-Contract into full proofs: https://learn.adacore.com/courses/intro-to-spark/index.html
Rust has the Contracts crate: https://docs.rs/contracts/latest/contracts/
Other programming languages have various levels of support or libraries for Design-by-Contract: https://en.wikipedia.org/wiki/Design_by_contract#Language_su...
Many Design-by-Contract implementations are nicer than standard assertions because they help to express intent better and easily refer to the 'old' value of a parameter to verify desired results.
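For comparison, a hand-rolled sketch of the same idea with plain C assertions, capturing the 'old' value manually (the stack type and names are invented; real Design-by-Contract support expresses this declaratively):

#include <assert.h>
#include <stddef.h>

struct stack { int *data; size_t len, cap; };

/* Push x onto a fixed-capacity stack.
 * Precondition: there is room. Postcondition: length grew by one, top is x. */
void stack_push(struct stack *s, int x) {
    assert(s != NULL && s->len < s->cap);   /* precondition */
    size_t old_len = s->len;                /* capture the 'old' value by hand */

    s->data[s->len++] = x;

    assert(s->len == old_len + 1);          /* postcondition */
    assert(s->data[s->len - 1] == x);
}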
What was helping me was that before I wrote a single expression, I thought about it carefully in my head and where it would lead before putting pen to paper, because I didn't want to make a bunch of messy scratch out marks on it. Or, sometimes, I'd use a healthy amount of throwaway scratch paper if allowed. Once my path was fully formed in my head I'd begin writing, and it resulted in far fewer mistakes.
I don't always take this approach to writing code but often I do formulate a pretty clear picture in my head of how it is going to look and how I know it will work before I start.
Also, the farthest I got in math/physics was intro to quantum mechanics, where the multiple pages it took to solve a single problem lost me.
Being a good programmer... I don't have a degree, so I've never really tried to get into FAANG. I'm also aware, after trying Leetcode, that I'm not an algorithm person.
What's funny is that at my current job, a huge multi-national entity, I have to push people to do code reviews or to fix small errors that make something look bad (like a button being shorter than the input next to it).
I'm self-aware about my true skill: I can make things, but I don't think I'd ever be a John Carmack. If you follow a framework's patterns, are you a good developer? I can see tangible metrics like performance: something is slow, and someone better makes it faster.
Recently someone forked a repo of a hardware project I made. It's fun watching them change it, to understand what I wrote and then change it to fit their needs.
When I see my old code I do recognize how it was verbose/could be much simpler.
When done well, every cycle of writing a test, seeing it fail, and then writing the barest amount of code you think will make it pass is a mini-proof. Your test setup and assertions are what cover your pre/post conditions. Base cases are the invariant.
The key here is to be disciplined, write the simplest test you can, see the test fail before writing code, write the smallest amount of code possible to make the test pass. Repeat.
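As a tiny sketch of what one turn of that loop leaves behind, with a hypothetical clamp() as the unit under test:

#include <assert.h>

/* Unit under test: clamp v into [lo, hi]. In the TDD rhythm, the
 * assertions below exist (and fail) before this body does. */
static int clamp(int v, int lo, int hi) {
    if (v < lo) return lo;
    if (v > hi) return hi;
    return v;
}

int main(void) {
    /* Each assertion is a narrow proposition about one concrete input. */
    assert(clamp(5, 0, 10) == 5);    /* already in range */
    assert(clamp(-3, 0, 10) == 0);   /* below the range: clamps to lo */
    assert(clamp(42, 0, 10) == 10);  /* above the range: clamps to hi */
    return 0;
}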
The next level is how cohesive or tightly coupled your tests are. Being able to make changes with minimal test breakage "blast radius" increases my confidence of a design.
You'll know quickly where you're going wrong: if you struggle to write the test first, for example, that's a symptom of a design issue.
That being said, I wouldn't use it as dogma, like everything else in CS, it should be used at the right time and in the right context.
I tend to go the other way, I don't use TDD when I am playing around/exploring as much as when I am more confident in the direction I want to go.
Leaving a failing test at the end of the day as a breadcrumb for me to get started on in the morning has been a favored practice of mine. Really helps get the engine running and back into flow state first thing.
The dopamine loop of Red -> Green -> Refactor also helps break through slumps in otherwise tedious features.
The most complex piece of code I have ever written, as a relevant story, took me a month to prove to everyone that it was correct. We then sent it off to multiple external auditors, one of which had designed tooling that would let them do abstract interpretation with recursion, to verify I hadn't made any incorrect assumptions. The auditors were confused, as the code we sent them didn't do anything at all: it had a stupid typo in a variable name... I had never managed to figure out how to run it ;P. But... they found no other bugs!
In truth, the people whom I have met who are, by far, the worst at this are the people who rely on testing, as they seem to have entirely atrophied the ability to verify correctness by reading the code and modeling it in some mathematical way. They certainly have no typos in their code ;P, but because a test isn't a proof, they always make assumptions in the test suite which are never challenged.
My thought here is that, since proofs are logic and so is code, you can't have a proof that can't be represented in code. Admittedly, this might look very different from typical JUnit-style unit tests, but it would still be a test validating logic. I am not saying every system is easily testable or deterministic, but overall, all else equal, the more tested and testable a system is, the better it is.
IME things that are very hard to test are often just poorly designed.
Not discounting the value of tests: we throw a bunch of general and corner cases at the function, and they will ring the alarm if in the future any change to the function breaks any of those.
But they don't prove it's correct for all possible inputs.
As an example, take a sorting procedure that sorts an arbitrarily long list of arbitrarily long strings. You can't establish just through testing that it will produce a correctly sorted output for all possible inputs, because the set of possible inputs is unbounded. An algorithmic proof, on the other hand, can establish that.
That is the crucial difference between reasoning about code versus merely testing code.
I would add that tests can be probabilistically exhaustive (e.g., property-based testing) and can answer questions beyond what proof-based reasoning provides, e.g., is this sorting of arbitrary strings actually efficient and fast?
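A rough sketch of that kind of probabilistic check in C, using the standard library's qsort() as the subject; as noted above, it raises confidence but is still not a proof:

#include <assert.h>
#include <stdlib.h>

static int cmp_int(const void *a, const void *b) {
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

int main(void) {
    /* Property: for many random inputs, the output must be non-decreasing. */
    for (int trial = 0; trial < 1000; trial++) {
        int a[100];
        int n = rand() % 100;
        for (int i = 0; i < n; i++)
            a[i] = rand();
        qsort(a, (size_t)n, sizeof a[0], cmp_int);
        for (int i = 1; i < n; i++)
            assert(a[i - 1] <= a[i]);
    }
    return 0;
}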
Actually, a test _is_ a proof! Or more specifically, a traditional test case is a narrow, specific proposition. For example, the test `length([1, 2, 3]) = 3` is a proposition about the behavior of the `length` function on a concrete input value. The proof of this proposition is _automatically generated_ by the runtime, i.e., the proof that this proposition holds is the execution of the left-hand side of the equality and observing it is identical to the right-hand side. In this sense, the runtime serves as an automated theorem prover (and is, perhaps, why test cases are the most accessible form of formal reasoning available to a programmer).
What we colloquially consider "proof" are really more abstract propositions (e.g., using first-order logic) that require reasoning beyond simple program execution. While the difference is, in some sense, academic, it is important to observe that testing and proving (in that colloquial sense) are, at their core, the same endeavor.
assert random() != 0.
QED

Having your invariants and pre/post conditions correct is not enough. You also need to do the right thing. For example, say you have a function that adds two durations in the form hh:mm:ss: you have mm < 60 and ss < 60 as invariants, and testing them is a good thing, but that won't tell you whether your addition is correct. Imagine your result is always 1s too much; you can even test associativity and commutativity and it will pass. Having these correct is necessary but not sufficient.
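A sketch of that failure mode (types and names invented): the invariants on the result hold, the operation is even commutative and associative, yet every sum is one second too large:

struct hms { int hh, mm, ss; };

struct hms add_hms(struct hms a, struct hms b) {
    int total = (a.hh + b.hh) * 3600 + (a.mm + b.mm) * 60
              + a.ss + b.ss + 1;                     /* off-by-one bug */
    struct hms r = { total / 3600, (total / 60) % 60, total % 60 };
    return r;                                        /* r.mm < 60 and r.ss < 60 still hold */
}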
Problem is, when you write tests first, especially tight, easy-to-run unit tests, you will be tempted to write code that passes the tests, not code that does the right thing. Like throwing stuff at your tests and seeing what sticks.
I much prefer the traditional approach of first solving the problem the best you can, which may or may not involve thinking about invariants, but always with the end result in mind. Only when you are reasonably confident in your code do you start testing. If it fails, you will have the same temptation to just pass the test, but at least you wrote the code once without help from the tests.
Maybe that's just me, but when I tried the "tests first" approach, the end result got pretty bad.
Tests do not ensure that your functions are correct; they ensure that you are alerted when their behavior changes.
Second, TDD is a way to force dogfooding: having to use the functions you’re going to write, before you write them, helps you design good interfaces.
I agree with that part and I am not against tests, just the idea of writing tests first.
> helps you design good interfaces
I am sure plenty of people will disagree but I think testability is overrated and leads to code that is too abstract and complicated. Writing tests first will help you write code that is testable, but everything is a compromise, and to make code more testable, you have to sacrifice something, usually in the form of adding complexity and layers of indirection. Testability is good of course, but it is a game of compromises, and for me, there are other priorities.
It makes sense at a high level though, like for public APIs. Ideally, I'd rather write both sides at the same time, as to have a real use case rather than just a test, and have both evolve together, but it is not always possible. In that case, writing the test first may be the next best thing.
- Write a test that demonstrates that it is doing the wrong thing
- Watch it fail
- Change the code to do the right thing
- Ensure the test passes
And in return I get regression prevention and verified documentation (the hopefully well structured and descriptive test class) for almost free.
I don't think any amount of testing absolves the programmer from writing clear, intention-revealing code that is correct. TDD is just a tool that helps ensure the programmer's understanding of the code evolves with the code. There have been so many times where I write code and expect a test to fail or pass and it doesn't. This detects subtle, minute drift in understanding.
So many communication issues on teams occur when people are using the same words to mean different things.
Just to be clear, I wasn't claiming that "communicating clearly" is a new idea in software engineering, I'm mainly commenting on how effective embracing it can be.
When doing math, pretty much every term is "load-bearing" in that arguments will make use of specific aspects of a concept and how it relates to other concepts.
If you look at most graduate-level math textbooks or papers, they typically start with a whole bunch of numbered definitions that reference each other, followed by some simple lemmas or propositions that establish simple relationships between them before diving into more complex theorems and proofs.
The best software projects I've seen follow a roughly similar pattern: there are several "core" functions or libraries with a streamlined API, good docs, and solid testing; on top of that there are more complex processes that treat these as black-boxes and can rely on their behavior being well-defined and consistent.
Probably the common thread between math and programming is both lean heavily on abstraction as a core principle.
Tossing out a few observations:
1. People make mistakes. Your strategy needs to account for that (resilient runtime, heavy type-checking, convenient and often-used REPL, etc).
2. Above a certain project size, nobody remembers everything. Your strategy needs to account for that (excellent multifaceted documentation, disallow long-range interactions in your code, etc).
3. Dependencies have a vastly higher cost than you expect, even in the short term. Plan for that (vendor more things, in-house more things, allocate resources to dependency management, cut scope, etc).
I could go on. The core point is that certain properties of projects are "plainly" true to most people who have been around any length of time. I don't think we're yet at a point where we can often predict anything meaningful about some specific new technology, but a mental framework of "this succeeded/failed _because_ of {xyz}" helps tremendously in figuring out if/how that new idea will fit into your current workplace.
I believe programmers should learn a bit about programming language theory, mostly the bits about what is a language. Then how to recognize the things behind the labels, and how they morph (either from an operation or from a translation between contexts). Then it's a matter of using the knowledge to move from the environment to a DSL that can express the business rules.
Architecture is the draft for the above, defining the starting point and a direction. Design is where you start to make decisions following that plan. Someone who has an idea of the destination can judge both.
- In the case of search or sort algorithms, where the primary concern is the speed of computation, undesirable states would be performing unnecessary or duplicate computations.
- In the case of encryption algorithms, undesirable states would be those that leak encrypted data.
- In the case of an order shipping and fulfillment system, an undesirable state would be marking an order as fulfilled when not all of the items have been delivered.
The more care that is taken to prevent undesirable states, the more the program takes on an algorithmic style. And the only way you can be sure that those undesirable states are impossible is to think in terms of proofs and invariants.
---
Let me elaborate: there is a huge history of dubious allegations of what constitutes "a real programmer" or stuff that makes you "a better programmer".
Combobulating and discombobulating McGuffins is often the best analogy, though. Not to diminish other kinds of thinking, but diminishing them is exactly what these claims do. When in doubt, showing the code is often what works.
That's all there is.
You want to get better at something? Do it all day every day.
I'm curious though. When you're coding, do you actually pause and think through the logic first, or is it more like writing and fixing along the way?
hiAndrewQuinn•9h ago
Jon Bentley, the writer of Programming Pearls, gave the task of writing an ordinary binary search to a group of professional programmers at IBM, and found that 90% of their implementations contained bugs. The most common one seems to have been accidentally running into an infinite loop. To be fair, this was back in the day when integer overflows had to be explicitly guarded against - but even then, it's a surprising number.
[1]: https://hiandrewquinn.github.io/til-site/posts/binary-search...
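For reference, a sketch of the loop with its invariant spelled out and the midpoint computed as lo + (hi - lo) / 2, which avoids the overflow variant of the bug:

#include <stddef.h>

/* Returns an index of target in sorted a[0..n), or -1 if absent.
 * Loop invariant: if target occurs in a[0..n), its index lies in [lo, hi). */
ptrdiff_t binary_search(const int *a, size_t n, int target) {
    size_t lo = 0, hi = n;
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;   /* avoids the (lo + hi) / 2 overflow */
        if (a[mid] < target)
            lo = mid + 1;
        else if (a[mid] > target)
            hi = mid;
        else
            return (ptrdiff_t)mid;
    }
    return -1;
}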
noosphr•14m ago
That they work for binary search is a very strong signal to people not familiar with them that they work for nearly everything (they do).
fragmede•8h ago
https://research.google/blog/extra-extra-read-all-about-it-n...
kamaal•6m ago
Every time I needed to write something algorithmically demanding, I could do it in a day or two. I'm not into Leetcoding or competitive coding.
Most regular everyday programmers can work these things out in one or two workdays.
It's definitely not like the competitive programmers claim, either: that if you aren't doing this full time, then at the moment you need to write something you won't have access to time, the internet, or even an IDE, and will have to write the code in a Google Doc (which itself needs an internet connection; when I pointed this out in interviews, they didn't like it).
groby_b•3h ago
When do you write code that doesn't need to search? (Unless you hand it all to the database, in which case... sure, you're good)