    ;; Premium customers with an order total over 100 get free shipping.
    ;; cond-> threads `order` through each step whose test is truthy,
    ;; so an order that fails the test passes through unchanged.
    (defn apply-shipping-rules [order]
      (cond-> order
        (and (= :premium (:customer-type order))
             (> (:order-total order) 100))
        (assoc :shipping-cost 0)))
Since there is an equivalence between types and propositions, the Clojure program also models a "type", in the sense that the (valid) inputs to the program are obviously constrained by what the program can (successfully) process. One ought, in principle, to be able to transform between the two, and generate (parts of) one from the other.
We do a limited form of this when we do type inference. There are also (more limited) cases where we can generate code from type signatures.
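Clojure gives you a limited, practical version of that round trip today: the constraints a function implicitly assumes can be written down as a spec, which can then validate inputs (and, via generators, produce conforming ones). A minimal sketch against the snippet above, assuming clojure.spec.alpha; the spec names and the :standard value are my own, not from the thread:

    (require '[clojure.spec.alpha :as s])

    ;; The "type" that apply-shipping-rules implicitly assumes, made explicit.
    (s/def :order/customer-type #{:premium :standard})
    (s/def :order/order-total number?)
    (s/def ::order (s/keys :req-un [:order/customer-type :order/order-total]))

    (s/valid? ::order {:customer-type :premium :order-total 150})
    ;; => true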
I think op's point is that the Clojure code, which lays the system out as a process with a series of decision points, is closer to the mental model of the domain expert than the Haskell code which models it as a set of types. This seems plausible to me, although it's obviously subjective (not all domain experts are alike!).
The secondary point is that the Clojure system may be more malleable - if you want to add a new state, you just directly add some code to handle that state at the appropriate points in the process. The friction here is indeed lower. But this does give up some safety in cases where you have failed to grasp how the system works; a type system is more likely to complain if your change introduces an inconsistency. The cost of that safety is that you have two representations of how the system works: the types and the logic, and you can't experiment with different logic in a REPL-like environment until you have fully satisfied the type-checker. Obviously a smarter system might allow the type-checker to be overridden in such cases (on a per-REPL-session basis, rather than by further editing the code) but I'm not aware of any systems that actually do this.
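Concretely, in the cond-> style from the snippet at the top, adding a new state is one more test/step pair dropped into the pipeline; the :oversized? rule below is a hypothetical addition, not something from the article:

    (defn apply-shipping-rules [order]
      (cond-> order
        (and (= :premium (:customer-type order))
             (> (:order-total order) 100))
        (assoc :shipping-cost 0)

        ;; hypothetical new decision point, added in place
        (:oversized? order)
        (assoc :shipping-method :freight)))

Nothing else has to change, which is the low friction; the flip side is that nothing checks whether :oversized? interacts sensibly with the rules around it.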
That's all certainly possible. But the same could be said of Python or JS. So if the big point here is "we can model business decisions as code!", I fail to see the innovation because we've been doing that for 50 years. Nothing unique to Clojure.
You could even do it in Haskell if you want: just store data as a Map of properties and values, emulating a JS object.
Most mainstream languages are very poorly equipped to do relational modeling. ORMs are a disaster (the object-relational impedance mismatch), and you shouldn't need an actual database running in the background just to model data relationally.
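Clojure is a partial exception: the standard library ships relational-algebra operations over plain sets of maps, no database process required. A small illustration with made-up data:

    (require '[clojure.set :as set])

    ;; Two "relations": just in-memory sets of maps.
    (def orders    #{{:order-id 1 :customer-id 10 :order-total 120}
                     {:order-id 2 :customer-id 11 :order-total 40}})
    (def customers #{{:customer-id 10 :customer-type :premium}
                     {:customer-id 11 :customer-type :standard}})

    ;; Natural join on the shared :customer-id key; no ORM, no server.
    (set/join orders customers)
    ;; => #{{:order-id 1 :customer-id 10 :order-total 120 :customer-type :premium}
    ;;      {:order-id 2 :customer-id 11 :order-total 40 :customer-type :standard}}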
Clojure's approach is superior to the class hierarchy or sum type solution for this sort of very loose business domain modelling, for the reasons stated in the article, but it's also a local optimum, and so is the "fat struct" solution (which is the statically typed equivalent). Even entity component systems are but a shadow of the relational model.
I'm glad people seem to have left behind the NoSQL-era feeling that the relational model is bad.
Relational databases still lock you into a specific design, and trying to work contrary to how your application was designed 10-15 years ago leads to terrible performance, high costs, and bugs galore.
It may be better than other options, but it's still not exactly a solved problem.
Where I disagree with the article is on refactoring. It's identically hard both ways. Migrating to new business rules while simultaneously running the old and new system is the hard part. I don't find static typing helps or hurts me in particular. Compiler warnings are useful, but my unit tests catch the dynamic parts as well. Either way a lot breaks and often needs temporary scaffolding between the versions.
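FWIW the scaffolding tends to look the same under either discipline: a shim that normalizes old-shape data into the new shape so both rule sets can run during the cutover. A hypothetical Clojure version, with invented field names:

    ;; Old format used a flat :shipping-cost; new format nests a :shipping map.
    ;; Normalizing on read lets old and new rules run side by side.
    (defn upgrade-order [order]
      (if (contains? order :shipping)
        order                                  ;; already new-format
        (-> order
            (assoc :shipping {:cost (:shipping-cost order 0)})
            (dissoc :shipping-cost))))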
Everything more complex than those building blocks isn't, in reality, a Type.
Reality doesn't consist of: an X type made up of these primitives and other defined sub-types, with the primitives hidden as far down as we can.
It's instead primitives arranged X-wise.
Or mapped a little better to programming terminology: A Schema.
It's about having the mental model that complex types can be useful as an abstraction but they aren't real and aren't worth fighting for or defending.
Types are for devs, devs aren't for types.
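One way to read "primitives arranged X-wise" is that the arrangement itself stays plain data, checked only when you choose to check it. A toy sketch, with invented names:

    ;; The schema is itself a map of primitives: key -> predicate.
    (def order-schema
      {:customer-type keyword?
       :order-total   number?})

    (defn conforms? [schema m]
      (every? (fn [[k pred]] (pred (get m k))) schema))

    (conforms? order-schema {:customer-type :premium :order-total 150})
    ;; => true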
The endgame of this problem always turns into some sort of “log of events” with loosely coupled subscribers.
A single state machine suffers from a combinatorial explosion of states as it has to handle every corner case, combinations of every scenario, etc…
What if a single shopping basket contains both a digital good and a physically shipped one? What if some items are shipped separately and/or delayed? Etc…
Instead the business rules are encoded into smaller state machines that listen to events on the log and pay attention only to relevant events. This avoids much of the complexity and allows the OOP types to remain relatively clean and stable over time.
Now the “digital goods” shipping handler can simply listen to events where “delivery=authorized” and “type=digital”, allowing it to ignore how the payment was authorised (or just store credit!) and ignore anything with physical shipping constraints.
It then writes an event that marks that line item in the shopping cart as “delivered”, allowing partial cancellations later, etc…
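A toy version of that shape in Clojure; the event fields echo the example above, but the wiring is illustrative rather than from any particular system:

    ;; The log is an ordered collection of event maps; subscribers pair a
    ;; predicate (which events do I care about?) with a handler that may
    ;; emit further events.
    (def subscribers
      [{:wants? (fn [e] (and (= :authorized (:delivery e))
                             (= :digital (:type e))))
        :handle (fn [e] [{:event :delivered :line-item (:line-item e)}])}])

    (defn dispatch [log event]
      (reduce (fn [log' {:keys [wants? handle]}]
                (if (wants? event) (into log' (handle event)) log'))
              (conj log event)
              subscribers))

    (dispatch [] {:delivery :authorized :type :digital :line-item 7})
    ;; => [{:delivery :authorized :type :digital :line-item 7}
    ;;     {:event :delivered :line-item 7}]

Each handler sees only what it subscribed to, which is exactly the decoupling the combinatorial-explosion argument asks for.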
"classic" sql databases are still safer for many things then mongodb.
it is easier to do away with types and constraints, but in many cases they do end up being important safeguards
The Big OOPs: Anatomy of a Thirty-Five Year Mistake - https://news.ycombinator.com/item?id=44612313 - July 2025 (181 comments)
The Big Oops: Anatomy of a Thirty-Five-Year Mistake [video] - https://news.ycombinator.com/item?id=44596554 - July 2025 (91 comments)