(defn apply-shipping-rules [order]
  (cond-> order
    (and (= :premium (:customer-type order))
         (> (:order-total order) 100))
    (assoc :shipping-cost 0)))

But a map is also just one solution. You could use a fat struct as well, or implement an ad-hoc relational database (which is what entity component systems really are).
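For contrast, a "fat struct" version of the same rule might look roughly like this in a statically typed language (a Haskell sketch; the field names are invented, not from the article):

data CustomerType = Regular | Premium deriving (Eq, Show)

data Order = Order
  { customerType :: CustomerType
  , orderTotal   :: Double
  , shippingCost :: Maybe Double  -- Nothing until some shipping rule sets it
  } deriving (Show)

applyShippingRules :: Order -> Order
applyShippingRules o
  | customerType o == Premium && orderTotal o > 100 = o { shippingCost = Just 0 }
  | otherwise                                       = o

One record carries every field any part of the system might care about, optional where necessary, and the rules stay ordinary functions over it.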
Since there is an equivalence between types and propositions, the Clojure program also models a "type", in the sense that the (valid) inputs to the program are obviously constrained by what the program can (successfully) process. One ought, in principle, to be able to transform between the two, and generate (parts of) one from the other.
We do a limited form of this when we do type inference. There are also (more limited) cases where we can generate code from type signatures.
I think op's point is that the Clojure code, which lays the system out as a process with a series of decision points, is closer to the mental model of the domain expert than the Haskell code which models it as a set of types. This seems plausible to me, although it's obviously subjective (not all domain experts are alike!).
The secondary point is that the Clojure system may be more malleable - if you want to add a new state, you just directly add some code to handle that state at the appropriate points in the process. The friction here is indeed lower. But this does give up some safety in cases where you have failed to grasp how the system works; a type system is more likely to complain if your change introduces an inconsistency. The cost of that safety is that you have two representations of how the system works: the types and the logic, and you can't experiment with different logic in a REPL-like environment until you have fully satisfied the type-checker. Obviously a smarter system might allow the type-checker to be overridden in such cases (on a per-REPL-session basis, rather than by further editing the code) but I'm not aware of any systems that actually do this.
That's all certainly possible. But the same could be said of Python or JS. So if the big point here is "we can model business decisions as code!", I fail to see the innovation because we've been doing that for 50 years. Nothing unique to Clojure.
You could even do it in Haskell if you want: just store data as a Map of properties and values, emulating a JS object.
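A minimal sketch of that, reusing the article's shipping rule (the catch-all value type and key names are invented for illustration):

import qualified Data.Map.Strict as M

-- An open, JS-object-style record: a map from property names to values.
data PropValue = PNum Double | PStr String | PBool Bool deriving (Show)

type Obj = M.Map String PropValue

order :: Obj
order = M.fromList [("customer-type", PStr "premium"), ("order-total", PNum 120)]

applyShippingRules :: Obj -> Obj
applyShippingRules o =
  case (M.lookup "customer-type" o, M.lookup "order-total" o) of
    (Just (PStr "premium"), Just (PNum t)) | t > 100 ->
      M.insert "shipping-cost" (PNum 0) o
    _ -> o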
Most mainstream languages are very poorly equipped to do relational modeling. ORMs are a disaster (object-relational mismatch) and you don't necessarily need an actual database running in the background.
Clojure's approach is superior to the class hierarchy or sum type solution for this sort of very loose business domain modelling, for the reasons stated in the article, but it's also a local optimum, and so is the "fat struct" solution (which is the statically typed equivalent). Even entity component systems are but a shadow of the relational model.
I’m glad people seem to have left behind the NoSQL-era feeling that the relational model is bad.
Relational databases still lock you into a specific design, and trying to work contrary to how your application was designed 10-15 years ago leads to terrible performance, high costs, and bugs galore.
It may be better than other options, but it's still not exactly a solved problem.
Where I disagree with the article is on refactoring. It's identically hard both ways. Migrating to new business rules while simultaneously running the old and new system is the hard part. I don't find static typing helps or hurts me in particular. Compiler warnings are useful, but my unit tests catch the dynamic parts as well. Either way a lot breaks and often needs temporary scaffolding between the versions.
And no, requirement changes don't have to cause that to happen and they don't have to wreak havoc throughout your application due to poor design decisions.
It's fine to encode rules directly into the type system, but only for rules that are known to be fixed (or at least not likely to ever change) throughout the lifetime of the project. For many business rules, however, this unfortunately doesn't apply.
Rules that are not fixed but are still a requirement for the code to work/make sense still merit an explicit encoding in the type system. You can have an interpreter somewhere that makes sense of unstructured data and delegates to the right functions once it's able to parse and slap a type on it, which is better than a function with a bunch of conditionals lying around, which at some point either forces you to duplicate them or to assume you're calling the right functions in the right order.
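A minimal sketch of that "parse, then delegate" shape in Haskell (all names here are invented):

-- Unstructured input is turned into a typed value at the boundary,
-- and only typed values reach the business functions.
data Payment = Card { cardNumber :: String }
             | StoreCredit { credit :: Double }

parsePayment :: [(String, String)] -> Either String Payment
parsePayment raw =
  case lookup "method" raw of
    Just "card"   -> maybe (Left "missing number") (Right . Card) (lookup "number" raw)
    Just "credit" -> maybe (Left "missing amount") (Right . StoreCredit . read) (lookup "amount" raw)
    _             -> Left "unknown payment method"

handlePayment :: Payment -> String
handlePayment (Card n)        = "charge card " ++ n
handlePayment (StoreCredit c) = "deduct " ++ show c ++ " store credit"

process :: [(String, String)] -> String
process raw = either ("reject: " ++) handlePayment (parsePayment raw)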
But then you lose the benefits a type system offers during refactoring. When business logic does change, if it's linked to the type system, then the logic is forced to change consistently throughout your system.
Of course, you don't want to be forced to change all your code whenever any business logic changes. But you never are. Basic separation of concerns should ensure that different pieces of logic are coupled to different types, such that the blast radius of any type change is limited.
Consider the example from the article, where the PaymentStatus type winds up with a whole bunch of variants. Code that deals with the status of a payment really needs to know all the different statuses a payment could have. If you add a PendingApproval variant to PaymentStatus, your refund workflow should break, because it needs to know to cancel the approval process without issuing a payment when the order is still pending approval. Meanwhile, code that doesn't deal with payments directly can treat that type as a black box.
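Roughly what that looks like (a sketch; the variant names are illustrative, not the article's exact type):

{-# OPTIONS_GHC -Wincomplete-patterns #-}

data PaymentStatus
  = Authorized
  | Captured
  | Refunded
  | PendingApproval   -- newly added variant
  deriving (Show)

-- With incomplete-pattern warnings turned into errors, the refund workflow
-- is flagged the moment PendingApproval is added, because it isn't handled.
refund :: PaymentStatus -> String
refund Authorized = "void the authorization"
refund Captured   = "issue a refund"
refund Refunded   = "already refunded, do nothing"
-- refund PendingApproval = "cancel the approval process, no payment to refund"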
Everything more complex than those building blocks isn't, in reality, a Type.
Reality doesn't consist of: X type made up of these primitives and other defined sub-types, and let's hide the primitives as far down as we can.
It's instead primitives arranged X wise.
Or mapped a little better to programming terminology: A Schema.
It's about having the mental model that complex types can be useful as an abstraction but they aren't real and aren't worth fighting for or defending.
Types are for devs, devs aren't for types.
lol
The endgame of this problem always turns into some sort of “log of events” with loosely coupled subscribers.
A single state machine suffers from a combinatorial explosion of states as it has to handle every corner case, combinations of every scenario, etc…
What if a single shopping basket contains both a digital good and a physically shipped one? What if some items are shipped separately and/or delayed? Etc…
Instead the business rules are encoded into smaller state machines that listen to events on the log and pay attention only to relevant events. This avoids much of the complexity and allows the OOP types to remain relatively clean and stable over time.
Now the “digital goods” shipping handler can simply listen to events where “delivery=authorized” and “type=digital”, allowing it to ignore how the payment was authorised (or just store credit!) and ignore anything with physical shipping constraints.
It then writes an event that marks that line item in the shopping cart as “delivered”, allowing partial cancellations later, etc…
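A minimal sketch of that handler, assuming an invented event shape:

data DeliveryType = Digital | Physical deriving (Eq, Show)

data Event = Event
  { lineItemId :: Int
  , delivery   :: String        -- e.g. "authorized", "delivered"
  , itemType   :: DeliveryType
  } deriving (Show)

-- The digital-goods handler reacts only to the events it cares about and
-- emits a follow-up "delivered" event per matching line item.
digitalGoodsHandler :: [Event] -> [Event]
digitalGoodsHandler events =
  [ e { delivery = "delivered" }
  | e <- events
  , delivery e == "authorized"
  , itemType e == Digital
  ]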
He has an interesting discussion about how this 'omniscience' is viewed as a bad thing by many, but it is also incredibly powerful for business logic, especially with heavy user interaction like games.
"classic" sql databases are still safer for many things then mongodb.
it is easier to do away with types and constraints, but in many cases they do end up being important safeguards
If the business logic changes, the internal type representation can be modified or a new module that fills the same signature but uses a different internal type can be written.
Modularity matters most. If you have to do massive refactoring of code because of a change in type representation, there’s an issue with the modularity of your design. Good modularity prevents refactoring from impacting the rest of the system unless you need to change your module — that’s something that’s true even if you have no static typing.
This is types working.
Every year someone figures out that a program can pass the most rigorous compile time type checks and yet still be wrong.
At least when you refactor your types, the compiler is going to pinpoint every line of code where you now have missing pattern checks, unhandled nulls, not enough parameters, type mismatches etc.
I find refactoring in languages like Python/JavaScript/PHP terrifying because of the lack of this and it makes me much less likely to refactor.
Even with a test suite (which you should have even when using types), it's not going to exhaustively catch problems the type system could catch (maybe you can trudge through several null errors your tests triggered, but there could be many more lurking), working backwards to figure out what caused each runtime test error is ad hoc and draining (like tracing back where a variable value came from and why it was unexpectedly null), and having to write and refactor extra tests to make up for the lack of types is a maintenance burden.
Also, most test suites I see do not contain type-related tests like sending the wrong types to function parameters, because it's so tedious and verbose to do this for every function and parameter, which is a massive test coverage hole. This is especially true for nested data structures that contain a mixture of types, arrays, and optional fields.
I feel like I'm never going to understand how some people are happy with a test suite and figuring out runtime errors over a magic tool that says "without even running any parameters through this function, this line can have an unhandled null error you should fix". How could you not want that and the peace of mind that comes with it?
Unless you are using a formal proof language, you're going to have that problem anyway. It's always humorous when you read comments like these and you find out they are using Rust or something similar with a half-assed type system.
Lots of languages other than Rust have static types, some more complete than others.
Hardly. Suppose all caught errors in a particular module of code bubble up to a call site which (say) retries with exponential back-off. If the compiler can guarantee that I handle every error, I only need one test that checks whether the exponential back-off logic works. With no error handling guarantee, I'd need to test that every error case is correctly caught—otherwise my output might be corrupted.
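A sketch of what that looks like when every error in the module surfaces as a value of one closed type (the names FetchError and retryWithBackoff are hypothetical):

import Control.Concurrent (threadDelay)

data FetchError = Timeout | RateLimited | NotFound deriving (Show)

-- All failures bubble up as Left FetchError, so one test of the back-off
-- wrapper covers the error path for the whole module.
retryWithBackoff :: Int -> Int -> IO (Either FetchError a) -> IO (Either FetchError a)
retryWithBackoff 0 _ action = action
retryWithBackoff retries delayUs action = do
  result <- action
  case result of
    Right x -> pure (Right x)
    Left _  -> do
      threadDelay delayUs
      retryWithBackoff (retries - 1) (delayUs * 2) action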
No matter how you slice it, you have to figure out what the software you are calling upon does and how it is intended to function. Which is also why you are writing tests: so that your users have documentation to learn that information from. That is what tests are for. That is what testing is all about! That it is also executable is merely to prove that what is documented is true.
All languages with exceptions enter the chat
Let us introduce you to the concept of checked exceptions. That is one of the few paradigms we've seen in actually-used languages (namely Java) where communicating which specific errors will occur has been tried.
Why is it that developer brains shut off as soon as they see the word "error"? It happens every time without fail.
Obviously mainstream statically typed languages can't formally verify all complex app behaviour. My frustration is more aimed at having time and energy wasted on runtime and test suite errors that could easily be caught by a basic type system with minimal effort, e.g. null checks, or checking that function parameters are the correct type.
Formal proof languages are a long way from being practical for regular apps, and require massive effort for diminishing returns, so we have to be practical to plug some of this gap with test cases and good enough type systems.
Once you've tested the complex things that (almost) no language has a type system able to express, you also have tested null checks, function parameter types, etc. by virtue of you needing to visit those situations in order to test the complex logic. This isn't a real problem.
What you might be trying to suggest, though, is that half-assed type systems are easier to understand for average developers, so they are more likely to use them correctly and thus feel the benefit from that? It is true that in order to write good tests you need to share a formal-proof-esque mindset, and thus they are nearly as burdensome to write as using a formal proof language. In practice, a lot of developers don't grasp that and end up writing tests that serve no purpose. That is a good point.
I just don't find this in practice. For example, I've worked in multiple large Python projects with lots of test cases, and nobody is making the effort to check what happens when you pass incorrect types, badly formed input, and null values in different permutations to each function because it's too much effort and tedious. Most tests are happy path tests, a few error handling tests if you're lucky, for a few example values that are going to miss a lot of edges.
And let's be honest, it's common for parts of the code to have no tests at all because the deadline was too tight or it's deemed not important.
If you have a type system that lets you capture properties like "this parameter should not be null", why would you not leverage this? It's so easily in the sweet spot for me of minimal effort, high reward e.g. eliminates null errors, makes refactoring easier later, that I don't want to use languages that expect me to write test cases for this.
> half-assed type systems are easier to understand for average developers
Not sure why you call them that. Language designers are always trying to find a sweet spot with their type systems, in terms of how hard they are to use and what payback you get. For example, once you try to capture even basic properties about e.g. the size/length of collections in the types, the burden on the dev gets unreasonably high very quickly (like requiring devs to write proofs). It's a choice to make them less powerful.
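For example, even the standard trick of tracking a collection's length in its type already looks like this in GHC (a sketch), and the burden grows quickly from there:

{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

data Nat = Z | S Nat

data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- Total head: the type rules out calling it on an empty vector...
vhead :: Vec ('S n) a -> a
vhead (VCons x _) = x

-- ...but even appending two vectors already needs type-level arithmetic,
-- and anything non-trivial starts demanding proofs from the programmer.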
This seems like a roundabout way of confirming that what you are actually saying is that half-assed type systems are much easier to grasp for average developers, and thus they find them to be beneficial because being able to grasp it means they are able to use it correctly. You are absolutely right that most tests that get written (if they get written!) in the real world are essentially useless. Good tests require a mindset much like formal proofs, which, like writing true formal proofs, is really hard. I did already agree that this was a good point.
> Not sure why you call them that.
Why not? It gets the idea across enough, while being sufficiently off brand that it gets those who aren't here for the right reasons panties in a knot. Look, you don't have to sell me on static typing, even where not complete. I understand the benefits and bask in those benefits in my own code. But they are also completely oversold by hyper-emotional people who can't discern between true technical merit and their arbitrary feelings. Using such a term reveals where one is coming from. Those interested in the technical merit couldn't care less about what you call it. If someone reacts to the term, you know they aren't here in good faith and everything they say can be ignored.
To clarify, I think formal verification languages are too advanced for almost everyone and overkill for almost every mainstream app. And type systems like we have in Rust, TypeScript and OCaml seem a reasonable effort/reward sweet spot for all levels of developer and most projects.
What's your ideal set up then? What type system complexity (or maybe language)? How extensive should the test suite be? What categories of errors should be left to the type system and which ones for the test suite?
Dynamic typing is on the other end of the spectrum. That is a huge pain precisely because there are no automated checks.
In between those two extremes there is a (subjective) sweet spot, where you don't pay much at all in terms of overhead, but you get back a ton from the checks it provides.
That's not true. At no point in testing `fn add(a: i32, b: i32) -> i32` am I going to call `add("a", "b")` or `add(2, None)`. Rust even won't permit me to try. In a language with a more permissive type system, I would have to add additional tests to check cases where parameters are null or of the wrong type.
It seems you either don't understand the topic of discussion or don't understand testing (see previous comment). If the user of your function calls it in undocumented ways, that's their problem, not yours. That is for their tests to reason with.
Passing the wrong types is only your problem for the functions you call. Continuing with your example, consider that you accidentally wrote (where the compiler doesn't apply type checking):
fn double(a: i32) -> i32 {
    add(a, None) // Should have been add(a, a)
}
How do you think you are going to miss that in your tests, exactly?

If it's practical to get a static type system to exhaustively check a property for you (like null checks), it's reckless in my opinion to rely on a test suite for that.
> If the user of your function calls it in undocumented ways, that's their problem, not yours.
Sounds reckless to me as well, because you should assume functions have bugs and will also be passed bad inputs. If a bug makes a function return a bad output, and that gets passed to another function in a way that gives "undocumented" behaviour, I'd much prefer the code to fail, or not compile at all, because when this gets missed in tests it'll eventually trigger in production.
I view it like the Swiss cheese model (mentioned elsewhere), where you try to catch bugs at a type checking layer, a test suite layer, code review, manual QA, runtime monitoring etc. and you should assume flaws at all layers. I see no good reason to skip the type checking layer.
// apply special discount
newPaymentInfo = {value: oldPaymentInfo.value / 2}
newPaymentInfo.tax = applyRegionalTax(newPaymentInfo.value)
return newPaymentInfo

which only gets parsed by another corner of the codebase at runtime:

// apply tax to tips only in some regions
if (taxableTips) {
  paymentInfo.tip += applyRegionalTax(paymentInfo.value)
  // ERROR: tip is undefined (instead of zero)
}

You can't validate everything all the time, and if you try, it's easy for that validation to fall out of sync with the actual demands of the underlying logic. Errors like this crop up easily while refactoring. That's why one of the touted benefits of Rust's type system is "fearless refactoring."

Snubbing type systems because they aren't 100% failproof misses that point.
It is true that it is impossible in general, but that says nothing about whether or not it is possible in almost every useful case
If we carve out "programs that run themselves on themselves and then do the opposite", what remains?
Just get good!
Your job as a programmer is to think through the domain logic, find the right abstractions to represent that logic, and then tell the computer to enforce those abstractions for you. Punting on business logic with "it's all messy, type systems can't be used to model it" is just saying "I can't take the time to find the right abstractions, sorry".
Which is why I'm partial to the data-oriented approach that Clojure seems to promote. If at its core, programming is just Data And Its Various Transformations, you can encode your data in simple structs or lists, and then your transformations encode the rules and logic. In this model you're still making illegal states unrepresentable, it's just being done in the base programming language instead of the type system. Having a type system that can verify that things aren't null, a string isn't a number, etc. is a benefit of course. But I don't see much difference in putting that logic in the base language or the higher order type system language, except the base language is more expressive and flexible.
I'm not sure if this is correct of course, I don't have enough experience to really be certain. But it does sound reasonable and some much smarter and more experienced developers seem to think so as well. But I'm open to having my mind changed.
Yes, this is 100% correct. Type systems are just another language.
> The same propensity for human error exists there as well.
This is where I disagree. Sure you can make errors in types, but it's not the same, because strong type systems are more like proof systems. They tell you when you've encoded something that doesn't make logical sense. And they help you figure out if your code can be made to adhere to them. It's a checker that you don't normally have.
> But I don't see much difference in putting that logic in the base language or the higher order type system language, except the base language is more expressive and flexible.
The type language encodes your assumptions, and the base language has to adhere to those assumptions. If your type language is expressive enough, you can encode pretty complex assumptions. There's value in having the two playing against each other.
Similar to tests: you could say tests are just re-stating what you've already written in your code. In reality, it's another check for consistency with your original intention.
forall s:Source, execute_machine_code(compile(s)) = interpret_source_code(s)
And the compiler will only accept your compiler code as being typed correctly if for all possible source code, running the compiled code gives the same result as interpreting the source code directly.
In other words, you are proving your compiler to be correct. Think about it as having an infinite number of test cases in a single line.
Now that's powerful!
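For reference, a Lean-style sketch of that statement (all names hypothetical, no proof attempted):

variable (Source MachineCode Value : Type)
variable (compile : Source → MachineCode)
variable (executeMachineCode : MachineCode → Value)
variable (interpretSourceCode : Source → Value)

theorem compiler_correct :
    ∀ s : Source, executeMachineCode (compile s) = interpretSourceCode s := by
  sorry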
Try writing some Haskell for a while.
Explore the compiler output. If you can ignore the language runtime it’s pretty reasonable. Good, even. There are a lot of cases where GHC can optimize away a lot of things you might think are unreasonable by reading the surface language.
Only the former should be represented by and constrained by the type system.
What is the evidence for that?
Admitting it's true, how expensive is it really?
Regardless of the style, when the business domain changes a lot, the most expensive part in general is... the tests. Not the APIs, not the types. The tests.
In order to even get a piece of the system working, the amount of boilerplate was enormous.
It won't help in delivering complex logic and might even be a costly mistake. E.g. if an exploratory attempt costs 3 days of refactoring to implement and then uncovers unexpected behavior, those 3 days spent on type adjustment are just time lost.
Today, having seen many complicated systems in both dynamically typed and statically typed languages, my opinion is that it's only a matter of preference. Dragons live everywhere.
The Big OOPs: Anatomy of a Thirty-Five Year Mistake - https://news.ycombinator.com/item?id=44612313 - July 2025 (181 comments)
The Big Oops: Anatomy of a Thirty-Five-Year Mistake [video] - https://news.ycombinator.com/item?id=44596554 - July 2025 (91 comments)