The things the author complains about seem to be "parts of systems thinking they aren't aware of". The field is still developing.
https://en.wikipedia.org/wiki/American_Society_for_Cyberneti...
I think it's worth considering that the theories you're familiar with are incredibly niche, have never gained any foothold in mainstream discussions of system dynamics, and it's not wrong for people not to be aware of them (or to choose not to mention them) in a post addressed at general audiences.
Further, you just missed the opportunity to explain these concepts to a broader HN audience and maybe make sure that the next time someone writes about it, they are aware of this work.
I've seen it happen more than a few times that when software needs to get made quickly, a crack team is assembled and Agile ceremonies and other bureaucratic decision processes are bypassed.
Are there general principles for when process is helpful and when it's not?
If you have a need for speed, a team that knows the space, and, crucially, a leader who can be trusted to depart from the usual process when that tradeoff better meets business needs, it can work really well. But it also comes with increased risk.
Hertz vs. Accenture: In 2019, car rental company Hertz sued Accenture for $32 million in fees plus additional damages over a failed website and mobile app project. Hertz claimed Accenture failed to deliver a functional product, missed multiple deadlines, and built a system that did not meet the agreed-upon requirements.
Marin County vs. Deloitte: In 2010, California's Marin County sued Deloitte Consulting for $30 million over a failed SAP ERP implementation. The county alleged Deloitte misrepresented its skills and used the county as a "training ground" for inexperienced consultants.
Human cultural systems are even worse than non-human living systems: they actively fight you. They are adversarial with regard to predictions made within them. If you're considered a credible source on economics and you say a recession is coming, you change the odds of a recession by causing the system to price in your pronouncement. This is part of why market contrarianism kind of works, but only if the contrarians are actually the minority! If contrarianism becomes popular, it stops being contrarian and stops working.
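A toy model of that feedback (entirely my own construction, with made-up parameters, not anything from the article): suppose the realized recession probability moves against whatever a credible forecaster publishes. The only forecast that survives its own publication is a fixed point of the reaction:

```python
# Toy self-defeating prophecy: the public reacts to a published
# recession forecast, shifting the true probability. The base risk
# and reaction strength are invented numbers for illustration.
def true_probability(published, base=0.5, reaction=0.6):
    # The louder the doom forecast, the more people brace for it,
    # pushing the realized odds back down.
    return base - reaction * published

# Repeatedly publish an honest forecast and watch reality shift.
p = 0.9
for _ in range(40):
    p = true_probability(p)

# p settles at the fixed point base / (1 + reaction) = 0.3125: the
# one forecast that is still true after everyone has heard it.
```

The same structure explains the contrarian point: the reaction term only pushes against the forecast while most people haven't already priced it in.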
So... predicting doom and gloom from overpopulation would obviously reduce the future population if people take it seriously.
Tangentially, everything in economics is a paradox. A classic example is the paradox of thrift: if everyone is saving, nobody can save, because for one to save another must spend. Pricing paradoxes are another example. When you're selling your labor as an employee you want high wages, high benefits, job security, etc., but when you go shopping you want low wages, low benefits, and a fluid job market... at least if you shop by comparing on price. If you are both a buyer and a seller of labor, you are your own adversary in a two-party pricing game.
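The thrift paradox can be sketched as a two-agent toy economy (my own illustrative numbers, nothing from the thread): each agent's income is the other's spending, so when both raise their saving rate, incomes shrink and there is less left to save from.

```python
# Paradox of thrift in miniature: two agents, where each one's income
# is the other's spending. Higher saving rates shrink the circular flow.
def total_income(save_rate, rounds=50, start=100.0):
    income = [start, start]
    for _ in range(rounds):
        spend = [(1 - save_rate) * y for y in income]
        income = [spend[1], spend[0]]  # my income is your spending
    return sum(income)

# Trying to save 20% leaves the economy smaller than saving 10%,
# so the extra thrift is self-defeating in aggregate.
```

At a zero saving rate the flow is preserved exactly; any positive rate decays it geometrically, which is the paradox in two lines of arithmetic.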
I personally hold the view that the arrow of time goes in one direction and the future of non-linear computationally irreducible systems cannot be predicted from their current state (unless you are literally God and have access to the full quantum-level state of the whole system and infinite computational power). I don't mean predicting them is hard, but that it's "impossible like perpetual motion" impossible.
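For a concrete picture of what "impossible like perpetual motion" means here, Wolfram's Rule 30 cellular automaton is the stock example of computational irreducibility: as far as anyone knows, the only way to learn the center cell at step N is to run all N steps. A minimal sketch (the width and step counts are arbitrary choices of mine):

```python
# Rule 30: new cell = left XOR (center OR right). Despite this
# one-line rule, no known formula shortcuts the simulation.
def rule30_step(cells):
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
            for i in range(n)]

def center_column(steps, width=101):
    cells = [0] * width
    cells[width // 2] = 1  # start from a single live cell
    column = []
    for _ in range(steps):
        column.append(cells[width // 2])
        cells = rule30_step(cells)
    return column

# The center column (1, 1, 0, 1, 1, 1, 0, 0, ...) looks statistically
# random; to get step 1000 you compute steps 1 through 999 first.
```

Rule 30 is fully deterministic and fully known, which is the point: unpredictability here isn't ignorance of the rules, it's the cost of running them.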
The way I learned "systems thinking" explicitly includes the perspectives this article offers to refute it: a system model is useful but only a model, it is better used to understand an existing system than to design a new one, and you should assume the system will react to resist intervention. I've found this definition of systems thinking extremely useful as a way to look reductively at a complex system (e.g. we keep investing in quality but keep having more outages anyway; maybe something is optimizing for the wrong goal) and intervene to shift behaviour without tearing down the whole thing, something this article dismisses as impossible.
The author and I would agree on Gall's Law. But the author's conclusion to "start with a simple system that works" commits the same hubris that the article, and Gall, warn against - how do you know the "simple" system you design will work, or will be simple? You can't know either of those things just by being clever. You have to see the system working in reality, and you have to see if the simplicity you imagined actually corresponds to how it works in reality. Gall's Law isn't saying "if you start simple it will work", it's saying "if it doesn't work then adding complexity won't fix it".
This article reads a bit like the author has encountered resistance in the past from people who cited "systems thinking" as the reason for their resistance, and so the author wants to discredit that term. Maybe the term means different things to different people, or it's been used in bad faith. But what the article attacks isn't systems thinking as I know it; it's more like high modernism. The author and systems thinking might get along quite well if they ever actually met.
Basic summary: the author cautions, through many examples, that once you get more than a handful of feedback loops, maps of the system become more like physical maps—necessarily oversimplified. With four feedback loops under the direct control of management, a map is still a diagnostic aid; but add everything in the US healthcare system, say, and fuggetaboudit! And because differences at the small scale add up to different long-term outcomes, the map doesn't let you forecast the long term, and it doesn't let you predict what to optimize. In fact, the only value the author finds in a systems map of a sufficiently complex system is as a rhetorical prop to show people why we need to reinvent the whole system. The author thinks this works very well, but only if the new system is grown organically, as it were, rather than imposed structurally.
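The "differences at the small scale add up" point has a one-line demonstration (my example, not the author's): the logistic map at r = 4, where a perturbation in the tenth decimal place wrecks the long-range forecast even though the short-range forecast is fine.

```python
# Logistic map x -> 4x(1-x): deterministic, fully known dynamics,
# yet a 1e-10 measurement error destroys the long-term forecast.
def trajectory(x0, steps, r=4.0):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.2, 50)
b = trajectory(0.2 + 1e-10, 50)  # error in the 10th decimal place

# Around step 5 the runs still agree to many digits; by steps 40-50
# they are unrelated. The model is exact; the initial measurement
# was off by one part in ten billion, and that was enough.
```

That's the sense in which a systems map can stay useful for diagnosis ("which loop is dominating right now?") while being useless for long-range prediction.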
The first criticism: this complaint about being unable to change a system is too amorphous and wibbly-wobbly to stand. Here's what I mean: the author gives the ICBM project in US military contracting as a success of the "reinvent method," but if you try to poke at that belief, it doesn't "push back" at you. Did we invent a whole new government to solve the ICBM project? We invented other layers of bureaucracy, sure, but they were embedded in the existing government and its bureaucracy. What actually happened was that a complex system existed containing two subsystems that were, while not entirely decoupled, still operating with substantial independence. Somewhere up the chain they both folded into the same bureaucracy under the same president, but that bureaucracy minimized a lot of its usual red tape.
This is actually the conceit of Theory of Constraints folks, although I don't usually see them being bold about it. The claim is: all of those hacks you do in order to ship something? “Colleague gave me a 400 line diff, eh fuckitapprove, we'll do it live” ... that sort of thing? Actually, say ToC folks, that is your system running well, not poorly. The complex system is being pinned to an achievable output goal and is being allowed to reorganize itself to achieve that goal. This is ultimately the point of all the ToC ‘finding the bottlenecks’ jargon. “But the safeties are off and someone will get hurt,” you say. And they reply, somewhat unhelpfully, “That's for the system to deal with.” Yes, the old configuration had mechanisms to keep things safe, but you need a new system with new mechanisms. And that's precisely what you see in these success examples: there actually is top-down systems engineering, but it's about how we maintain our quality standards and how we keep the system accountable.
If the first criticism is that “organically grow a new system to take its place” is airy-fairy, the second criticism is just that the hopelessness is unnecessarily pessimistic. Yes, complex systems with lots of feedback loops maintain a homeostasis and revert back to it as you poke and prod them. Yes, it is really frustrating that to change one thing, you must change everything. Yes, it is doubly frustrating that systems nominally about providing and promoting X turn out to provide and promote Y while actually being X-neutral (think, for instance, of anything you do that ultimately just allows your manager to cover their ass: it is never described as a CYA, just acknowledged silently that way in hallway conversation).
But we know complex systems that find new homeostatic equilibriums. You, reading this, probably know someone (maybe a friend, maybe a friend of a friend) who kicked drugs. You also know somebody who managed to “lose the weight and keep it off.” You know a player who became a family man, and you yourself remember instances where you were a dumb kid reliving the same shitty day over and over when you could have just done this one damn thing differently—you know it now!—and your days would have gotten steadily better rather than staying the same old rut. So you know that these inscrutably complex things do change.

Sometimes it's pinning the result, like someone who drops the pounds because “I just resolved to live like my friend Derek; he agreed to walk me through everything in his life for a week. I wrote down what he eats for breakfast, when he hits the gym, how much he talks with friends and family, then I forced myself to live on this schedule for a month and finally got the hang of it.” Sometimes it's literally changing everything: “Yeah, I lost the pounds because I went to live in the Netherlands, and school was a 50 minute bike ride from my apartment each way, and then I didn't have any friends so I joined the university's competitive ultimate frisbee team. So my dinner most days was bought that day after practice in a 5 minute trip through the grocery—a raw bell pepper, a ball of mozzarella, maybe some bread in olive oil—I didn't have time to cook anything big.” Or sometimes it was imposed top-down but with good motivation: “Yeah, I really wanted to get a role as an orphan in this musical, so I dieted and dieted with the idea of ‘I can binge once I get the part, but I have to sell scrawny orphan when auditions come round soon,’ and it sucked for two weeks, but then I got used to the lifestyle and no longer wanted to binge. Funny how that worked out.”
There are so many different stories, and yes, they never look like what we imagine success to look like. But being pessimistic about the existence of a solution in general, because there's nothing in common across the success stories, seems to throw the baby out with the bathwater. There is hope; it's just that when people look at a systems map, they get into a rut of looking for the one thing to change, when really everything on that map needs to change. What you've created is a big networked dependency graph of the places you need to interrogate, to figure out whether each can cope with the new way of doing things and, if not, whether it's going to dig its heels in and try to block the change. There's still use in it; you just need to view the whole graph holistically.
voidhorse•47m ago
I'd encourage people to look into soft systems methodology, critical systems theory, and second-order cybernetics, all of which are pretty explicitly concerned with the problem of the "system fighting back". The article is good, as Works in Progress articles usually are, but the initial premise and resulting coverage are shallow as far as the intellectual depth and lineage here go.