There's a folk story - I don't remember where I read it - about a genealogy database that made it impossible to, e.g., have someone be both the father and the grandfather of the same person. That worked well until they had to put in details about a person who had fathered a child with his own daughter - and was thus both the father and the grandfather of that child. (Sad as it might be, it is something that can, in fact, happen in reality, and unfortunately does.)
While that was probably just database constraints of some sort, which could easily be relaxed, and not strictly "unrepresentable" like the example in the article - it is easy to paint yourself into a corner by making a possible state of the world unrepresentable because your mental model deems it impossible.
The critical thing with state and constraints is knowing at what level the constraint should be enforced. This is what trips up most people, especially when designing relational database schemas.
I think the solution to that is to continuously refactor, and to spell out very clearly what your assumptions are when you are writing the code (which is an excellent use for comments).
The real world and user experience requirements have a way of intruding on these underspecified models of how the world "should" be.
Hi, can you give an example? Not sure I understand what you're getting at there.
(My tuppence: "the map is not the territory", "untruths programmers believe about...", "Those drawn with a very fine camel's hair brush", etc etc.
All models are wrong, and that's inevitable/fine, as long as the model can be altered without pain. Focusing on the ease of improving the model (e.g. can we do rollbacks?) is more valuable than getting the model "right").
- that's why OOP failed - side effects made the software too fluid for its complexity
- that's why functional and generic programming are on the rise - good FP implementations are natively immutable, and generic programming makes FP practical.
- that's why Kotlin and Rust are in a position to purge Java and C, philosophically speaking - the only things that remain are technical concerns, such as JetBrains' IDEA lock-in (that's basically the only place where you can do proper Kotlin work), as well as Rust's "hostility" to other bare-metal languages, embedded performance, and compiler speed.
I don't know how strong the lock-in is.
also, hot take: Kotlin simply does not need this many tools for refactoring, thanks in part to the first-class FP support. in fact, almost every non-Android Kotlin dev I have ever met would be totally happy with analysis and refactoring levels on par with Rust Analyzer.
but even with LSP, I would still need IDEA (at least Community) for Java -> Kotlin migration and for smooth Java interoperability.
inheritance is what has changed in scope: it's now discouraged to base your inheritance structure on your design domain.
trivia: Kotlin interfaces were initially called "traits", but with the Kotlin M12 release (2015), they were renamed to interfaces because Kotlin traits basically are Java interfaces. [0]
[0]: https://blog.jetbrains.com/kotlin/2015/05/kotlin-m12-is-out/...
2 indeed never made sense to me, since once everything is compiled down to assembly, "protected" means nothing, and if you can get a pointer to the right offset you can read "passwords". The claim that enforcing what can and cannot be reached from a subclass helps security never made sense to me.
3 I never liked function overloading; I prefer optional arguments with default values. If you need a function to work with multiple types for one parameter, make it a template and constrain what types can be passed.
7 interfaces are a must-have for when you want to add tests to a bunch of code that has no tests (see the sketch after this list).
8 Rust macros do this, and it's a great way to add functionality to your types without much hassle.
9 idk what this is
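On 7, here's a minimal Rust sketch of what I mean (all names are made up): put a trait in front of the dependency, have the real code depend on the trait, and let the tests pass in a fake.

    // Hypothetical example: a trait boundary makes untested code testable,
    // because tests can substitute a fake implementation for the real one.
    trait Clock {
        fn now_millis(&self) -> u64;
    }

    struct SystemClock;

    impl Clock for SystemClock {
        fn now_millis(&self) -> u64 {
            use std::time::{SystemTime, UNIX_EPOCH};
            SystemTime::now()
                .duration_since(UNIX_EPOCH)
                .expect("system clock before epoch")
                .as_millis() as u64
        }
    }

    // The code under test depends on the trait, not on the system clock.
    fn is_expired(clock: &dyn Clock, deadline_millis: u64) -> bool {
        clock.now_millis() > deadline_millis
    }

    #[cfg(test)]
    mod tests {
        use super::*;

        struct FixedClock(u64);

        impl Clock for FixedClock {
            fn now_millis(&self) -> u64 {
                self.0
            }
        }

        #[test]
        fn expired_only_after_deadline() {
            assert!(is_expired(&FixedClock(2_000), 1_000));
            assert!(!is_expired(&FixedClock(500), 1_000));
        }
    }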
Actually, I have some similar concerns about powerful type systems in general -- not just OOP. Obsessing over expressing and enforcing invariants at the small scale can make it hard to make changes, and to make improvements at the large scale.
Instead of creating towers of abstraction, what often works better is to structure things as a loosely coupled set of smaller components -- bungalows where possible. Interaction points should be limited. There is little point in building up abstraction to prevent every possible misuse when dependencies are kept in check, so that module 15 is only used by modules 11 and 24. The callers can easily be checked when making changes to 15.
But yeah -- I tend to agree with GP that immutability is a big one. Writing things once, and making copies to avoid ownership problems (deleting an object is mutation too), prevents a lot of bugs. And there are so many more ways to realize things with immutable objects than people knew some time ago. The typical OOP codebase from the 90s and 00s is chock-full of unnecessary mutation.
Could you please expand upon your idea, particularly the idea that creating (from what I understood) a hierarchical structure of "blackboxes" (abstractions) is bad, and perhaps provide some examples? As far as I understand, the idea that you compose lower level bricks (e.g. classes or functions that encapsulate some lower level logic and data, whether it's technical details or business stuff) into higher level bricks, was what I was taught to be a fundamental idea in software development that helps manage complexity.
> structure things as a loosely coupled set of smaller components
Mind elaborating upon this as well, pretty please?
You'll notice it yourself when you try to actually apply this idea in practice. But a possible analogy is this: how many tall buildings are around your place, what did they cost, and how groundbreaking are they? Chances are, most buildings around you are quite low. Low buildings have a higher overhead in space cost, so especially in denser cities, there is a force pushing toward buildings with more levels.
But after a certain number of levels, there are diminishing returns from going even higher, compared to just creating an additional building of the same size. And the overhead increases: higher levels are more costly to construct, and they require a better foundation. We can see that most tall buildings are quite boring: how to construct them is well understood, and there isn't much novelty. There just aren't that many types of buildings that have all of these properties: 1) tall/many levels, 2) low overall cost of creation and maintenance, 3) practical, 4) novel.
With software components it's similar. There are a couple of ideas that work well enough that you can stack them on top of each other (say, CPU code on top of CPUs on top of silicon, userspace I/O on top of filesystems on top of hard drives, TCP sockets on top of network adapters...), which allows you to make things that are well enough understood and robust enough that it's really economical to scale out on top of them.
But also, there isn't much novelty in these abstractions. Don't underestimate the cost of creating a new CPU or a new OS or new software components, and of maintaining them!
When you create your own software abstractions, those just aren't going to be that useful, they are not going to be rock-solid and well tested. They aren't even going to be that stable -- soon a stakeholder might change requirements and you will have to change that component.
So, in software development, it's not like you come up with rock-solid abstractions and combine 5 of those to create something new that solves all your business needs and is understandable and maintainable. The opposite is the case. The general, pre-made things don't quite fit your specific problem. Their intention was not focused on your specific goal. The more of them you combine, the less the solution fits, the less understandable it is, and the more junk it contains. Also, combining is not free. You have to add a _lot_ of glue to make it even barely work. The glue itself is a liability.
But OOP, as I take it, is exactly that idea. That you're creating lots of perfect objects with a clear and defined purpose, and a perfect implementation. And you combine them to implement the functional requirements, even though each individual component knows only a small part of them, and is ideally reusable in your next project!
And this idea doesn't work out in practice. When trying to do it that way, we only pretend to abstract, we just pretend to reuse, and in the process we add a lot of unnecessary junk (each object/class has a tendency to be individually perfected and to be extended, often for imaginary requirements). And we add lots of glue and adapters, so the objects can even work together. All this junk makes everything harder and more costly to create.
> structure things as a loosely coupled set of smaller components
Don't build on top of shoddy abstractions. Understand what you _have_ to depend on, and understand the limitations of that. Build as "flat" as possible i.e. don't depend on things you don't understand.
I also think it's about how many people you can get to buy in on an abstraction. There probably are better ways of doing things than the unix-y way of having an OS, but so much stuff is built with the assumption of a unix-y interface that we just stick with it.
Like, why can't I just write a string of text at offset 0x4100000 on my SSD? You could, but a file abstraction is a more manageable way of doing it. But there are other manageable ways of doing it, right? Why can't I just access my SSD contents like it's one big database? That would work too, right? Yeah, but we already have the file abstraction.
>But OOP, as I take it, is exactly that idea. That you're creating lots of perfect objects with a clear and defined purpose, and a perfect implementation. And you combine them to implement the functional requirements, even though each individual component knows only a small part of them, and is ideally reusable in your next project!
I think OOP makes sense when you constrain it to a single software component with well-defined inputs and outputs. Like, I'm sure many GoF-type patterns were used in implementing many STL components in C++. But you don't need to care about what patterns were used to implement anything in <algorithm> or <vector>; you just use these as components to build a larger component. When you don't have well-defined components that just plug and play over the same software bus, no matter how good you are at design patterns, it's eventually going to turn into an incomprehensible spaghetti mess.
I'm really liking your writing style by the way, do you have a blog or something?
Neither Kotlin nor Rust cares about effects.
Switching to Kotlin/Rust for FP reasons (and then relying on programmer discipline to track effects) is like switching to C++ for RAII reasons.
Kotlin and Rust are just a lot more practical than, say, Clojure or Haskell, but they both take lessons from those languages.
Right. Just like Strings aren't inherently bad. But languages where you can't tell whether data is a String or not are bad.
No language forbids effects, but some languages track effects.
When you actually design interfaces you discover that there are way more states to keep in mind when implementing asynchronous loading.
1. There’s an initial state, where fetching has not happened yet
2. There may be initial cached (stale or not) data
3. Once loaded the data could be revalidated / refreshed
So the assumption that you either are loading XOR have data XOR have an error does not hold. You could have data, an error from the last revalidation, and be loading (revalidating).
This will be discovered at compile time if you use immutability & types like the article suggests.
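If you do spell it out with immutable types - a sketch in Rust, with invented type and field names - each of those situations gets its own variant instead of a pile of separate loading/error/data fields:

    // A sketch, not a prescription: each state carries exactly the data
    // that exists in that state, including "stale data while revalidating".
    struct UserData {
        name: String,
    }

    enum FetchError {
        Network,
        Server(u16),
    }

    enum UserState {
        // 1. Nothing has been requested yet.
        NotAsked,
        // First load in flight, possibly with cached (maybe stale) data to show.
        Loading { cached: Option<UserData> },
        // 2./3. Data is present; it may be revalidating right now, and the
        // previous revalidation may have failed.
        Loaded {
            data: UserData,
            revalidating: bool,
            last_error: Option<FetchError>,
        },
        // The initial load failed and there is nothing to show.
        Failed(FetchError),
    }

A match on UserState then forces every caller to decide what to do with the cached data, the revalidating flag, and the last error - exactly where the "loading XOR data XOR error" assumption falls apart at compile time.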
> So the assumption that you either are loading XOR have data XOR have an error does not hold.
If you design render() to take actual Userdata, not LoadingUserdata, then you simply cannot call render if it's not loaded.
The way to produce unspecified behaviour is to enable nulls and mutability. Userdata{..} changing from unloaded to loaded means your render() is now dynamically typed and does indeed hit way more states than anticipated.
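A rough Rust sketch of that shape (names are hypothetical): the only way to reach render() is through the Loaded variant.

    // render() only ever sees loaded data; "render before loaded" has no syntax.
    struct UserData {
        name: String,
    }

    enum Loadable<T> {
        Loading,
        Failed(String),
        Loaded(T),
    }

    fn render(user: &UserData) -> String {
        format!("Hello, {}", user.name)
    }

    fn view(state: &Loadable<UserData>) -> String {
        match state {
            Loadable::Loading => "spinner".to_string(),
            Loadable::Failed(msg) => format!("error: {}", msg),
            // The only path to render() goes through real UserData.
            Loadable::Loaded(user) => render(user),
        }
    }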
The idea that the data model looks like
enum XYZ {
    case A(B, C, D, E);
    case F(G, H, I, J, K);
    case L();
    case M(N, O);
}
is just not true in practice. I think messaging is one case where it can happen, but even there it's often good to combine fields and share them (and common code) across multiple types of messages. If messages A and B both have a field "creationTime" or whatever (with identical semantics), it's probably a bad idea to model them as separate fields, because that leads to code duplication, which is unmaintainable.
Maybe I can be more precise by saying that ADTs can be good for making clear what's "there", so they can be used to "send" information. But to write any useful usage code, you typically need a different representation that folds common things into common places. And it might just happen that field F is valid in cases A and B but not C. That's life! Reality is not a tree but a graph.
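A rough Rust sketch of "folding common things into common places" (all names invented): shared fields live in a wrapper struct, and only the variant-specific parts go into the enum.

    use std::time::SystemTime;

    // Variant-specific parts only.
    enum MessageBody {
        Chat { text: String },
        Ack { message_id: u64 },
    }

    // Shared fields live in one place, so shared code is written once.
    struct Message {
        creation_time: SystemTime,
        sender: String,
        body: MessageBody,
    }

    fn log_line(msg: &Message) -> String {
        // Common code reads common fields without matching on the body.
        format!("{:?} from {}", msg.creation_time, msg.sender)
    }

Of course, the moment a field is valid for some variants but not others, you're back to choosing between duplication and an Option that is sometimes meaningless - the graph-not-tree problem again.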
That's why it's a bad idea to try and model the exact set of possible states and rule out everything else completely in the type system. Even the most complicated type systems can only deal with the simple cases, and will fail and make a horrible mess at the more complicated ones.
So I'm saying, there is value in preventing some misuse and preventing invalid states, but it comes at a cost that one has to understand. As so often, it's all about moderation. One should avoid fancy type system things in general, because those create dependency chains throughout the codebase. Data, and consequently data types (including function signatures), are visible at the intersection of modules, which is why it's so easy to create unmaintainable messes by relying on type systems too much. When it's possible to make a usable API with simple types (almost always), that's what you should do.
Now, is that gonna help design better software in any particular way?
What does it mean when loading is false, error is Some, and data is also Some? The type allows nonsense. You write defensive checks everywhere. Every render must guess which combination is real.
How about you never write the wrong state in the first place? Then you have nothing to take care of.

Indeed, and tagged unions (enums in Rust) explicitly allow you to avoid creating invalid state.
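A tiny Rust sketch of the contrast, with made-up field names:

    // Flag soup: nothing stops loading == false, error == Some and data == Some
    // from coexisting, so every reader has to guess which combination is real.
    struct RequestFlags {
        loading: bool,
        error: Option<String>,
        data: Option<String>,
    }

    // Tagged union: each state carries only the fields that make sense for it,
    // so the nonsense combinations cannot even be constructed.
    enum Request {
        Loading,
        Failed { error: String },
        Done { data: String },
    }

(And if you genuinely need "stale data plus a failed revalidation", as discussed upthread, you give that situation its own variant instead of smuggling it in through flags.)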
> Stability isn’t declared; it emerges from the sum of small, consistent forces.
> These aren’t academic exercises — they’re physics that prevent the impossible.
> You don’t defend against the impossible. You design a world where the impossible has no syntax.
> They don’t restrain motion; they guide it.
I don't just ignore this article. I flag it.
Edit: thank you for the answers, I don't know how I missed that em dash clue.
(oh no am i one of them?)
Yet in practice, I've found that it's easy to over-model things that just don't matter. The number of potential states can balloon until you have very complex types to handle states that you know won't really occur.
And besides, even a well modelled set of types isn't going to save you from a number that's out of the expected range, for example.
I think Golang is a good counterexample here -- it's highly productive and makes it easy to write very reliable code, yet it makes it near impossible to do what the author is suggesting.
Properly considered error handling (which Go encourages) is much more important.