The author said LLMs help. Let's lynch him!
Some omissions during initial development may have a very long tail of negative impact - obvious examples are not wiring observability into the code from the outset, or not structuring code with easy testing as an explicit goal.
Keeping a lab notebook lets me write better documentation faster, even if I start late, and tests let me verify that the design doesn't drift over time.
Starting blindly may be acceptable for a one-off tool written in a weekend, but for anything that is going to live longer, building the foundation slowly allows the things built on it to be sound, rational (for the problem at hand) and, more importantly, understandable/maintainable.
Also, as an unpopular opinion, design on paper first, digitize later.
Sometimes a redesign of the types you relied on becomes necessary to accommodate new stuff, but that would be true under any language; otoh, the "exploratory" part of coding feels faster and easier.
I do admit that modern frameworks also help in that regard, compared to just stitching libraries together, at least for typical webdev stuff rather than more minimalistic CLI utilities or tools: the likes of Ruby on Rails, Django, Laravel, Express, and even ASP.NET or Spring Boot.
This is the exact opposite of my experience. Every time I am playing around with something, I feel like I'm experiencing all of its good and none of its bad ... a honeymoon phase if you will.
It's not until I need to cover edge cases, prevent invalid states, display helpful error messages to the user, and eliminate any potential side effects that I discover the "unknown unknowns".
Tools aside, I think everyone with 10+ years of experience can think of a time they had a prototype go well in a new problem space, only to realize during the real implementation that there were still multiple unknown unknowns.
I usually start top-down, sketching the API surface or UI scaffold before diving into real logic. Iteration drives the process: get something running, feel out the edges, and refine.
I favor MVPs that work end-to-end, to validate flow and reduce risk early. That rhythm helps me ship production-ready software quickly, especially when navigating uncertainty.
One recent WIP: https://zero-to-creator.netlify.app/. I built it for my kid, but I’m evolving it into a full-blown product by tweaking the edges as I go.
So much this.
Get the data model right before you go live and everything is simple; get it wrong and be prepared for constant pain balancing real data, migrations, uptime and new features. Ask me how I know.
If you're building something yourself or in a small team, I absolutely agree with everything written in the post. In fact, I'd emphasize you should lean into this sort of quick and dirty development methodology in such a context, because this is the strength of small-scale development. Done correctly it will have you running circles around larger operations. Bugs are almost always easy to fix later for a small team or solo dev operation, as you can expect everyone involved to have a nearly perfect mental model of the entire project, and the code itself will, regardless of the messes you make, tend to stay relatively simple due to Conway's law.
In larger development projects, fixing bugs and especially architectural mistakes is exponentially more expensive as code understanding is piecemeal, the architecture is inevitably nightmarishly complex (Conway again), and large scale refactoring means locking down parts of the code base so that dozens to hundreds of people can't do anything (which means it basically never happens). In such a setting the overarching focus is and should be on correctness at all steps. The economies of scale will still move things forward at an acceptable pace, even if individual developers aren't particularly productive working in such a fashion.
Basically there are things you can't avoid that are not necessarily fast (e.g. compilation, docker build, etc.) and things that you can actually control and optimize. Tests and integration tests are part of that. Learning how to write good effective tests that are quick to run is important. Because you might end up with hundreds of those and you'll be spending a lot of your career waiting for those to run. Over and over again.
Here's what I do:
- I run integration tests concurrently. My CPUs max out when I run my tests. My current build runs around 400 integration tests in about 35 seconds. Integration test means the tests are proper black box tests that hit a REST API with my server talking to a DB, Elasticsearch and Redis. Each test might require users/teams and some content set up. We're talking many thousands of API calls happening in about 35 seconds.
- There is no database cleanup in between tests. Database cleanup is slow. Each build starts with an ephemeral docker container. So it starts empty but by the time the build is over you have a pretty full database.
- To avoid test interaction, all data is randomized. I use a library that generates human readable names, email addresses, etc. Creating new users/teams is fast, recreating the database schema isn't. And because at any time there can be 10 separate tests running, you don't want this anyway. Some tests share the same read only test fixture and team. Recreating the same database content over and over again is stupid.
- A proper integration test is a scenario that is representative of what happens in your real system. It's not a unit test. So the more side effects, the better. Your goal is to find anything that might break when you put things together. Finding weird feature interactions, performance bottlenecks, and sources of flakiness is a goal here and not something you are trying to avoid. Real users don't use an empty system. And they won't have it exclusive to themselves either. So having dozens of tests running at the same time adds realism.
- Unit tests and integration tests have different goals. With integration tests you want to cover features, not code. Use unit tests for code coverage. The more features an integration test touches, the better. There is a combinatorial explosion of different combinations of inputs. It's mathematically impossible to test all of them with an integration test. So, instead of having more integration tests, write better scenarios for your tests. Add to them. Refine them with detail. Asserting stuff is cheap. Setting things up isn't. Make the most of what you setup.
- IMHO anything in between scenario tests and unit tests is a waste of time. I hate white box tests, because they are expensive to run and write and yet not as valuable as a good black box integration test. Sometimes you have to. But these are low value, high maintenance, expensive-to-run tests. A proper unit test is high value, low maintenance and very fast to run (it mocks/stubs everything it needs, there is no setup cost). A proper integration test is high value, low maintenance, and slow to run. You justify the time investment with value. Low maintenance here means not a lot of code is needed to set things up.
- Your integration test becomes a load and stress test as well. Many teams don't bother with this. I run mine 20 times a day, because it takes less than a minute. Anything that increases that build time gets identified and dealt with. My tests passing gives me a high degree of certainty that nothing important has broken.
- Most of the work creating a good test is setting up the "given" part of a BDD-style test. Making that easy with some helper functions is key. Most of my tests require users, teams, etc. and some objects. So I have a function "createTeam" with some parameters that calls all the APIs to get that done. This gets called hundreds of times in a build. It's a nice one-liner that sets it up. Most of my tests read like this: create a team or teams, do some stuff, assert, do more stuff, assert, etc.
- Poll instead of sleeping. A lot of stuff happens asynchronously, so there is a lot of test code that waits for shit to happen. I use kotest-assertions, which has a nice "eventually" helper that takes a block and runs it until it stops throwing exceptions (or times out). It has a configurable retry interval that backs off with increasing sleep periods. Most things just take a second or two to happen. (See the sketch after this list.)
- If your CPUs are not maxed out during the test, you need to be running more tests, not less. Server tests tend to be IO blocked, not CPU blocked. And your SSD is unlikely to be the bottleneck. We're talking network IO here. And it's all running on localhost. So, if your CPUs are idling, you can run more tests and can use more threads, co-routines, whatever.
- Get a decent laptop and pay for fast CI hardware. It's not worth waiting 10 minutes for something that could build in about a minute. That speedup is worth a lot. And it's less likely to break your flow state.
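For what it's worth, here's a rough sketch of two of those patterns (a createTeam-style setup helper with randomized data, and poll-instead-of-sleep) translated to Python/pytest; the original stack is Kotlin + kotest, and the base URL, endpoints and payloads below are made-up placeholders, not the actual API:

```python
# Hypothetical Python/pytest analogue of the setup-helper and poll-instead-of-sleep
# patterns described above. Endpoints, payloads and BASE_URL are placeholders.
import time
import uuid

import requests
from faker import Faker  # generates human-readable names/emails for randomized fixtures

fake = Faker()
BASE_URL = "http://localhost:8080"  # assumption: the server under test runs locally


def create_team(members: int = 1) -> dict:
    """Set up a team plus users through the public API only (black box, no DB access)."""
    resp = requests.post(
        f"{BASE_URL}/teams",
        json={"name": f"{fake.company()}-{uuid.uuid4().hex[:8]}"},  # random name avoids test interaction
    )
    resp.raise_for_status()
    team = resp.json()
    for _ in range(members):
        requests.post(
            f"{BASE_URL}/teams/{team['id']}/users",
            json={"name": fake.name(), "email": fake.unique.email()},
        ).raise_for_status()
    return team


def eventually(assertion, timeout: float = 10.0, interval: float = 0.1):
    """Poll instead of sleeping: retry the assertion with backoff until it passes or times out."""
    deadline = time.monotonic() + timeout
    while True:
        try:
            return assertion()
        except AssertionError:
            if time.monotonic() > deadline:
                raise
            time.sleep(interval)
            interval = min(interval * 2, 2.0)  # back off, but keep polling a few times a second


def test_new_document_shows_up_in_search():
    team = create_team(members=2)
    requests.post(
        f"{BASE_URL}/teams/{team['id']}/documents", json={"title": fake.sentence()}
    ).raise_for_status()

    def indexed():
        hits = requests.get(f"{BASE_URL}/search", params={"team": team["id"]}).json()["hits"]
        assert len(hits) == 1  # indexing is asynchronous, so poll until it lands

    eventually(indexed)
```

Run with pytest-xdist (`pytest -n auto`) this gets you something like the concurrent, CPU-saturating test runs described above.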
This stuff is a lot easier if you engineer and plan for it. Introducing concurrently running tests to a test suite that isn't ready for it can be hard. Engineering your tests to be able to support running concurrently results in better tests. So if you do this properly, you get better tests that run faster. Win win. I've been doing this for a while. I'm very picky about what is and isn't a good test. There are a lot of bad tests out there.
This is a pretty easy and natural thing to do because it's quite easy to go "I shaved 2.5 minutes off my build" whereas "I increased the maintainability and realism of our tests, adding 3 minutes to the build" is a much more nebulous and hard thing to justify even when it saves you time in the long run.
>A proper unit test is high value, low maintenance and very fast to run (it mocks/stubs everything it needs, there is no setup cost).
^^ This is a case in point: mocks and stubs do make for fast-running test code, but they commensurately decrease the realism of that test and increase its maintenance cost.
Here are some things I have learned:
* Learn one tool well. It is often better to use a tool that you know really well than something that on the surface seems more appropriate for the problem. For an extremely large number of real-life problems, Django hits the sweet spot.
Several times I have started a project thinking that maybe Django is too heavy, but soon the project outgrew the initial idea. For example, I just created a status page app. It started as a single-file Django app, but luckily I realized soon that it makes no sense to try to go around Django's limitations.
* In most applications that fit the Django model, the data model is at the center of everything. Even if you are making a rough prototype, never postpone data model refactoring. It just becomes more and more expensive and difficult to change over time.
* Most applications don't need to be single-page apps nor require heavy frontend frameworks. Even for those that can benefit from it, traditional Django views are just fine for 80% of the pages. For the rest, consider Alpine.js/HTMX.
* Most of the time, it is easier to build the stuff yourself. Need to store and edit customers? With Django, you can develop a simple CRM app inside your app in just a few hours (see the sketch after this list). Integrating a commercial CRM takes much more time. This applies to everything: status page, CRM, support system, sales processes, etc., as well as most Django apps/libraries.
* Always choose extremely boring technology. Just use Python/Django/Postgres for everything. Forget Kubernetes, Redis, RabbitMQ, Celery, etc. Alpine/HTMX is an exception, because you can avoid much of the JavaScript stack.
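To make the build-it-yourself point concrete, here is a minimal sketch of what such an in-app "CRM" could look like in Django; the model and fields are invented for illustration, and the admin gives you list/search/edit screens for free:

```python
# Minimal sketch of an in-app "CRM": one model plus an admin registration gives you
# create/search/edit screens via the Django admin. Model and field names are illustrative.

# customers/models.py
from django.db import models


class Customer(models.Model):
    name = models.CharField(max_length=200)
    email = models.EmailField(blank=True)
    notes = models.TextField(blank=True)
    created_at = models.DateTimeField(auto_now_add=True)

    def __str__(self):
        return self.name


# customers/admin.py
from django.contrib import admin


@admin.register(Customer)
class CustomerAdmin(admin.ModelAdmin):
    list_display = ("name", "email", "created_at")
    search_fields = ("name", "email", "notes")
```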
In my day job I work with Go, and while it's fine, I end up writing 10x more code for simple API endpoints, and as soon as you add query parameters for filtering, pagination, etc. it gets even longer. Adding a permissions model on top is similar. Of course there's a big performance difference, but the DB queries largely dominate performance, even in Python, at least for most of the things I do.
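For contrast, here's roughly what a filtered, ordered, permission-checked list endpoint looks like in Django REST Framework; this is a generic sketch reusing the hypothetical Customer model above, not anyone's actual code:

```python
# Sketch of a filtered, ordered, permission-checked list endpoint in Django REST
# Framework; pagination comes from DRF's global DEFAULT_PAGINATION_CLASS setting.
from rest_framework import filters, generics, permissions, serializers

from customers.models import Customer  # the hypothetical model sketched above


class CustomerSerializer(serializers.ModelSerializer):
    class Meta:
        model = Customer
        fields = ["id", "name", "email", "created_at"]


class CustomerList(generics.ListAPIView):
    queryset = Customer.objects.order_by("-created_at")
    serializer_class = CustomerSerializer
    permission_classes = [permissions.IsAuthenticated]
    filter_backends = [filters.SearchFilter, filters.OrderingFilter]
    search_fields = ["name", "email"]          # ?search=alice
    ordering_fields = ["created_at", "name"]   # ?ordering=-created_at
```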
This doesn't mean writing tests for everything, and sometimes it means not writing tests at all, but it means that I do my best to make code "testable". It shouldn't take more time to do this, though: if you're making more classes to make it testable, you're already messing it up.
This also doesn't mean compromising on readability, but it does mean eschewing practices like "Clean Code". Functions end up being as large as they need to be. I find that a lot of people, especially those doing Ruby and Java, tend to spend too much time here. IMO having lots of 5-line functions is totally unnecessary, so I just skip this step altogether.
It also doesn't mean compromising on abstractions. I don't even like the "rule of three" because it forces more work down the line. But since I prefer DEEP classes and SMALL interfaces, in the style of John Ousterhout, the code doesn't really take longer to write. It does require some thinking but it's nothing out of the ordinary at all. It's just things that people don't do out of inertia.
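As a toy illustration of the deep-module idea (entirely made up, not from the comment): a class whose public surface is two methods while the messy details stay inside:

```python
# Toy example of a "deep" class in the Ousterhout sense: a small public interface
# (append/read) hiding serialization, retries and storage details. Entirely made up.
import json
import sqlite3
import time


class EventStore:
    def __init__(self, path: str = "events.db"):
        self._conn = sqlite3.connect(path)
        self._conn.execute(
            "CREATE TABLE IF NOT EXISTS events (id INTEGER PRIMARY KEY, body TEXT)"
        )

    def append(self, event: dict) -> None:
        # Retries and serialization are hidden behind this one call.
        for attempt in range(3):
            try:
                with self._conn:  # commits or rolls back the transaction
                    self._conn.execute(
                        "INSERT INTO events (body) VALUES (?)", (json.dumps(event),)
                    )
                return
            except sqlite3.OperationalError:
                time.sleep(0.05 * (attempt + 1))
        raise RuntimeError("could not append event")

    def read(self) -> list:
        rows = self._conn.execute("SELECT body FROM events ORDER BY id").fetchall()
        return [json.loads(body) for (body,) in rows]
```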
One thing I am a bit of hardliner about is scope. If the scope is too large, it's probably not prototype or MVP material, and I will fight to reduce it.
EDIT: kukkeliskuu said below "learn one tool well". This is also key. Don't go "against the grain" when writing prototypes or first passes. If you're fighting the framework, you're on the wrong path IME.
mattmanser•1h ago
So that's not a problem with this process itself. You're describing problems with managers, and problems with developers being unable to handle bad managers.
Even putting aside the manager's incompetence, as a developer you can mitigate this easily in many different ways. Here are a few:
- Just don't show it to management
- Deliberately make it obviously broken at certain steps
- Take screenshots of it working and tell people "this is a mockup, I still have to do the hard work of wiring it up"
It's all a balancing act between needing to get feedback from stakeholders and managing expectations. If your management is bad, you need to put extra work into managing expectations.
It's like the famous duck story from Jeff Atwood (see jargon number 4); sometimes you have to manage your managers:
https://blog.codinghorror.com/new-programming-jargon/