It's a lot easier and better to use profiling in general, but that doesn't mean I never read code and think "hmm, that's going to be slow".
This sounds like every technical job interview.
Your perception is still warranted. It was clear enough to me what all of that meant, but I was well aware that static is an awkward, highly overloaded term, and I already had the sense that all this boilerplate is a negative.
Sounds crazy, but I usually end up doing that, anyway, as I work.
Another tip that has helped me is to add documentation to inline code after it's written (I generally add some inline comments as I write, but not much; most of my initial documentation is headerdoc). The process of reading the code helps cement its functionality in my head, and I also find bugs, just like he mentions.
> Sounds crazy, but I usually end up doing that, anyway, as I work.
This doesn't sound crazy to me. On the contrary, it sounds crazy not to do it.
How many bugs do we come across where we ask rhetorically, "Did this ever work?" It becomes obvious that the programmer never ran the code; otherwise the bug would have surfaced immediately.
Writing Solid Code is over 30 years old, and has techniques that are still completely relevant today (some have become industry standard).
Reading it was a watershed in my career.
Interestingly there's a post from the past day arguing that "Making invalid states unrepresentable" is harmful[0], which I don't think I agree with. My experience is that bugs hide in crevices created by leaving invalid states representable, and are often caused by the increased cognitive load of not having small reasoning scopes. In terms of reading code to find bugs, having fewer valid states and fewer intersections of valid state makes this easier. With well-defined and constrained interfaces you can reason about more code because you need to keep fewer facts in your head.
electric_muse's point in a sibling comment "The whole “just read the code carefully and you’ll find bugs” thing works fine on a 500-line rope implementation. Try that on a million-line distributed system with 15 years of history and a dozen half-baked abstractions layered on top of each other. You won’t build a neat mental model, you’ll get lost in indirection." is a good case study in this too. Having poorly scoped state boundaries makes this kind of reasoning hard; here too, making invalid states unrepresentable and keeping interfaces constrained helps.
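As a tiny sketch of the idea (a hypothetical `FetchResult` type of my own, assuming Java 17's sealed interfaces and records): with two nullable fields, "data and error both set" is representable and every reader has to re-prove it never happens; a sealed hierarchy removes those states from the type entirely.

```java
public class UnrepresentableDemo {
    // Only two states exist: Success or Failure. "Both" or "neither"
    // cannot be constructed, so no code path needs to reason about them.
    sealed interface FetchResult {}
    record Success(String data) implements FetchResult {}
    record Failure(String error) implements FetchResult {}

    static String describe(FetchResult r) {
        if (r instanceof Success s) return "ok: " + s.data();
        if (r instanceof Failure f) return "err: " + f.error();
        throw new IllegalStateException("unreachable: FetchResult is sealed");
    }

    public static void main(String[] args) {
        System.out.println(describe(new Success("42")));   // ok: 42
        System.out.println(describe(new Failure("down"))); // err: down
    }
}
```

The reasoning scope shrinks accordingly: any function taking a `FetchResult` only has to handle the two valid cases.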
The author doesn't grasp how much of what they've written amounts to flexing their own outlier intelligence; they must sincerely believe the average programmer is capable of juggling a complex 500-line program in their head.
You can manipulate values in a debugger to make it go down any code path you like.
There's a well-known quote: "Make the program so simple, there are obviously no errors. Or make it so complicated, there are no obvious errors." A large application may not be considered "simple" but we can minimize errors by making it a sequence of small bug-free commits, each one so simple that there are obviously no errors. I first learned this as "micro-commits", but others call it "stacked diffs" or similar.
I think that's a really crucial part of this "read the code carefully" idea: it works best if the code is made readable first. Small readable diffs. Small self-contained subsystems. Because obviously a million-line pile of spaghetti does not lend itself to "read carefully".
Type systems certainly help, but there is no silver bullet. In this context, I think of type systems a bit like AI: they can improve productivity, but they should not be used as a crutch to avoid reading, reasoning, and building a mental model of the code.
electric_muse•6h ago
What actually prevents bugs at scale is boring stuff: type systems, invariant checks, automated property testing, and code reviews that focus on contracts instead of line-by-line navel gazing. Reading code can help, sure, but treating it as some kind of superpower is survivorship bias.
vbezhenar•5h ago
1. Code defensively, but don't spend too much time on handling error conditions. Abort as early as possible. Keep enough information to locate the error later. Log relevant data. For example, just put `Objects.requireNonNull` on public arguments that must not be null. If they're null, an exception will be thrown, which should abort the current operation. The exception stack trace will include enough information to pinpoint the bug location and fix it later.
2. Monitor for these messages and act accordingly. My rule of thumb: zero stack traces in logs. A stack trace is a sign of a bug and should be handled one way or another.
With bug prevention, it's important to stay reasonable: there's only so much time in the world, and business people usually don't want to pay 10x to eliminate 50% of bugs. Handling theoretical error conditions also adds to the complexity of the codebase and might actually hurt its maintainability.
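To make point 1 concrete (a hypothetical `greet` method of my own, not from the thread): validating public arguments up front means the stack trace points at the real bug site rather than at some later symptom.

```java
import java.util.Objects;

public class FailFastDemo {
    // Abort early: a null argument fails here, at the API boundary,
    // instead of causing a confusing NPE deep inside the call chain.
    static String greet(String name) {
        Objects.requireNonNull(name, "name must not be null");
        return "hello, " + name;
    }

    public static void main(String[] args) {
        System.out.println(greet("world")); // hello, world
        try {
            greet(null);
        } catch (NullPointerException e) {
            // In production this would be logged and monitored, per point 2.
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```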
voihannena•5h ago
notpachet•4h ago
coxley•5h ago
This can be combined with a more strategic approach like: https://mitchellh.com/writing/contributing-to-complex-projec...
jcgrillo•4h ago
alphazard•3h ago
Yes, something strange happens in large systems, where it's better to assume they work the way they are supposed to, rather than deal with how they (currently) work in reality.
It's common in industry for (often very productive) people to claim the "code is the source of truth", and just make things work as bluntly as possible, sprinkling in special cases and workarounds as needed. For smaller systems that might even be the right way to go about it.
For larger systems, there will always be bugs, and the only way for the number of bugs to tend to zero is for everyone to have the same set of strong assumptions about how the system is supposed to behave. Continuously depending on those assumptions, and building more and more on them will eventually reveal the most consequential bugs, and fixing them will be more straightforward. Once they are fixed, everything assuming the correct behavior is also fixed.
In large systems, it is worse to build something that works, but depends on broken behavior than to build something that doesn't work, but depends on correct behavior. In the second case you basically added an invariant check by building a feature. It's a virtuous process.
Amorymeltzer•2h ago
>If you want a single piece of advice to reduce your bug count, it’s this: Re-read your code frequently. After writing a few lines of code (3 to 6 lines, a short block within a function), re-read them. That habit will save you more time than any other simple change you can make.
So, more focused on a ground-up, de novo thing as opposed to inheriting or joining a large project. Different models of "code" and different strokes for different folks, I guess, but the big takeaway I like from that initial piece is:
>I spent the next two years keeping a log of my bugs, both compile-time errors and run-time errors, and modified my coding to avoid the common ones.
It was a different era, but I feel like the act of manually recording specific bugs probably helps ingrain them better and help you avoid them in the future. Tooling has come a long way, so maybe it's less relevant, but it's not a bad thing to think about.
In the end, a lot of learning isn't learning per se, but rather learning where the issues are going to be, so you know when to be careful or check something out.
mamcx•2h ago
There is a layer above this: to understand, really, what the requirements are, and to check whether they are delivered. You can have perfect code that does nothing of consequence. It's the equivalent of `this function is not used by anything`, but more macro.
But of course, the problem is to decipher the code, where what you say helps a ton.
geocar•2h ago
Yes, yes, why bother reading your code at all? After all, eventually 15 years will pass whether you do anything or not!
I think if you read it while it's 500 lines, you'll see a way to make it 400. Maybe 100 lines. Maybe shorter. As this happens you get more and more confident that these 50 lines are in fact correct, and they do everything that 500 lines you started with do, and you'll stop touching it.
Then, you've got only 1.5M lines of code after 15 years, and it's all code that works: that you don't have to touch. Isn't that great?
Comparing that to the 15M lines of code that doesn't work, that nobody read, just helps make the case for reading.
> What actually prevents bugs at scale is boring stuff: type systems, invariant checks, automated property testing, and code reviews that focus on contracts instead of line-by-line navel gazing.
Nonsense. The most widely-deployed software with the lowest bug-count is written in C. Type systems could not have done anything to improve that.
> sure, but treating it as some kind of superpower is survivorship bias
That's the kind of bias we want: Programs that run for 15 years without changing are the ones we think are probably correct. Programs that run for 15 years with lots of people trying to poke at them are ones we can have even more confidence in.
mtVessel•8m ago
> Nonsense. The most widely-deployed software with the lowest bug-count is written in C. Type systems could not have done anything to improve that.
C is statically and fairly strongly typed. Hard to tell if you're arguing for or against the statement you're responding to.
ratmice•2h ago
It is also valuable to form a hypothesis of how you think the code works, and then measure in the debugger how it actually works. Once you understand how these differ, it can be helpful to restructure the code so its structure better reflects its behavior.
Time spent reading code is almost never fruitless.
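One lightweight way to do that (a hypothetical example of my own, not from the thread): write the hypothesis down as an executable check before opening the debugger, so the gap between belief and behavior is recorded rather than vaguely remembered.

```java
public class HypothesisDemo {
    // Helper under study. Hypothesis: it truncates toward zero,
    // so div(-7, 2) should be -3.
    static long div(long a, long b) {
        return Math.floorDiv(a, b); // actual: floors toward negative infinity
    }

    public static void main(String[] args) {
        // Encoding the hypothesis as a check exposes the mismatch:
        System.out.println("div(-7, 2) = " + div(-7, 2)); // -4, not the expected -3
    }
}
```

Once the mismatch is visible, you can decide whether the hypothesis or the code was wrong, and rename or restructure so the next reader's hypothesis matches reality.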