For example, I had the misfortune of needing help from the mental health community a decade ago. I sometimes think about how much easier that part of my life would have been if TikTok and Claude had existed back then. It’s easy to forget, but many of the institutions we’ve built over centuries are deeply dysfunctional and very much in need of disruption.
The night often seems darkest just before dawn.
(With that said, I agree there are risks of the kind you describe.)
Another way of thinking about it: a non-trivial amount of training data is internet comments and blogs, which have an alarming amount of self-diagnoses, non-professional diagnoses, and totally fabricated facts about mental illness.
Since I wasn't asking anything related to pH, I skimmed past that section and didn't notice the error until much later in the chat when the LLM decided to build upon erroneous reasoning.
I think someone who hadn't studied chemistry would've relied on its answer, since all the rest of the logic would've been correct if the solution really did become more acidic.
Especially using TikTok to try and improve mental health issues seems a bit like trying to fight fire with a (edit: spelling) hose of jet fuel.
I imagine it will rely a lot on the pilot, and how well they understand those limitations. Perhaps the bigger risks are those without a good understanding of LLMs who just treat it like an all-knowing expert human.
It's obvious there's no intelligence behind it when you make it try to do something with data that gets parsed by a computer.
And therapists don't? All the data used in diagnosis is self-reported feelings. All the progress made from seeing a therapist is also self-reported feelings.
Random chance might make more accurate diagnoses than any based on self-reported feelings.
That doesn't mean I think LLMs couldn't be used for therapy with great success. Just not general-purpose models without a lot of tweaking.
Have you seen One Flew Over the Cuckoo's Nest? I think it serves as a good model to explain why we can both be right. You’re looking at the average effect of the institution across every minor character that appears in frame and deeming it positive, and it was. I’m looking at what happened to Jack Nicholson and see the institution as deeply dysfunctional. If you think about it, I think you’ll agree that I’m not wrong about that.
But what the author isn't clearly saying is that he is describing a free market as opposed to a planned economy; even if he doesn't specifically say that, it's the implication of the final question.
And I am all for a planned economy. I want to cooperate with my fellow humans, not compete to crush them. In this day and age it is absurd not to do this. The technology is there, we just need to use it the right way. I know many will have a knee-jerk reaction to this statement and start spitting out the old anti-Soviet arguments about planned economies; I'm bracing.
If you want competition then accept the social outcomes. Wanting to equalize competition through regulation is exactly like banning war crimes. When war is raging, all kinds of war crimes will be committed.
We don't have to copy the Soviets exactly, but it's foolish to pretend their system didn't work. Sure, they killed and imprisoned people, but we do the same on at least the same scale. Socialism, at least in principle, isn't specifically aiming for that outcome. Whatever we're doing here kinda does. Our country would crumble if the war machine ever stopped. Changing our approach to anything is just too expensive to consider, and dissent can't be tolerated for the same reason. What options do we really have?
Surely we can do better
Your complaint is actually about humans.
As long as we have humans we will have people that seek to cultivate and maintain power at all costs.
It's just AI generated, which is a shame to end up at the top of the site.
I worked in the defense industry, specifically on weapons meant for export. At the time, I was considered a subject matter expert. Something that eventually got me selected to travel to a foreign nation that had purchased an update to that weapon we sold them.
This country had an active internal conflict at the time, and the soldiers I was training had used the previous iteration of that weapons system. Prior to flying out, my peers had forewarned me that these young men loved to try to horrify us soft westerners. I was no exception, of course. For me, when we were relaxing over dinner, one of the soldiers felt it was my turn. He showed me photographs of a family of corpses. 5 adults. 4 children, the youngest I guess being an infant and the oldest probably no older than 8 or 9 years old. A multi-generational household, all civilian, all unarmed. And here he was telling me in unabashed gruesome detail how he had murdered them during a patrol, speaking as if he had brought justice to an ancient crime. Using that older iteration of weapons I had helped create.
But I had my marching orders. No actions or words that could potentially lead to an incident. Whatever I felt got buried by black humor and a few tips on how they could do it more efficiently in the future. Later on I heard my callousness had earned me a modicum of respect.
Knowing what I just told you, would you feel comfortable in your planned economy if you were ordered to join my team and work on improving that same weapon to the best of your ability? Would you say that is the right way to use the technology we are building? Knowing who it was going to go to?
It doesn't seem to matter whether an individual works for the weapons manufacturers; they're going to build them with my tax dollars and sell them to monsters whether I like it or not.
If they want to have more impact, they need to adapt to more market-like techniques.
Whether an institution moving faster would be good or bad, I am not sure, actually. Something that belongs to "everyone", with all kinds of heated opinions, is probably not the best place to move fast.
OTOH, and this is as a Spaniard (I do not know enough about the specifics of America), I feel that nowadays some of the institutions are "injected" with changes that society is not demanding from them. Destroying the established base and traditions of (in this case) society. Social engineering and influencing campaigns, I would say.
Maybe this is not the main topic of the article. Just was a brain dump I was doing about observations of my own.
These things have always existed, actually. It is just that with technology and many people using it, information from individuals, etc., this influence is probably exerted more effectively.
This is decidedly untrue. You can make a few clever(er) points here: VC meddling is rotting at the core of ingenuity (which I happen to agree with) or even that large companies (GOOG/MSFT/etc.) are tangentially capturing startups via incentives (think free credits, etc.). But author doesn't make these, so I won't argue against (or for) them.
> Today, higher-leverage actors are strip-mining institutional commons—democratic norms, social trust, educational relevance, economic mobility—faster than lower-leverage actors can regenerate them.
This will seem like a technicality, but for a pedant like myself, it's quite important: this is absolutely not the tragedy of the commons. It might be a new tragedy (maybe we'll call it "theft"?), but the interesting and paradoxical nature of the original has nothing to do with this reformulation. What makes the tragedy of the commons interesting is that each agent rationally maximizes its local payoff even though doing so degrades the global optimum for everyone. A sort of "missing the forest for the trees" thing.
> The stakes couldn't be higher. If we don't develop conscious approaches to managing leverage arbitrage, we risk a future where technological capability advances exponentially while social coordination capacity deteriorates linearly—a recipe for civilizational breakdown.
This seems a tad dramatic; "leverage arbitrage" was significantly higher during colonial or industrial times and society didn't exactly collapse. I agree with the sentiment, but not sure about all the "боже мой" ("oh my god").
It's a common pitfall to take yourself out of the equation when you are about to make assertions without evidence to back them up. The title could have been written as:
"Why do I feel everything is broken, and what can I do about it".
You will end up writing a completely different post as a result: a more sincere one, and one that is closer to the reality of the situation that needs to be addressed, which is one's own perceptions, biases, and feelings that come from personal experience.
Sometimes collective action is required to fix an issue. Restricting people to individual action is depriving them of their freedom of association, a core enabler of democracy.
If you convince many powerless people to work alone, they aren’t gonna be a threat.
what halayli wrote is spot on in this context. If you're writing a blog post and call it a day, then you're not trying to change the status quo. If he's giving talks, seeking out interviews etc to address this issue, you'd have a basis to argue your points...
But like this? No, it doesn't hold up.
It's super normalized however, and not specific to this blog. Most blogs seem to call on the reader to do things. And even if every reader did, you still wouldn't have changed the status quo, because that needs a way bigger investment.
I think it is important from time to time to exit the world of individual responsibility ("What can I do to reach the moon?") and enter the world of collective organization ("What can we do to reach the moon?"). You probably understand why.
It is strange finding this comment at the top given its fundamental misunderstanding of TFA.
And the articles from 2025 onward all use em-dashes.
Curious, huh?
> Each operates at a different mathematical order: linear, exponential, and systematic respectively.
This was where I stopped reading. "Systematic" mathematical order. Sure.
Personally I do care about punctuation, it's part of the (my) style and presentation, the way to convey meaning, etc. Still a reference to a classic: https://news.ycombinator.com/item?id=7865024
But the fact that a blog trying to communicate an interesting idea is using a predictive machine to word things is just so depressing. Is it your blog, or is it ChatGPT's? Did you come up with the term leverage arbitrage, or did a predictive model?
Did OP just provide an outline and let the predictive model write the entire thing? It just feels sad.
It's not X -- It's Y
On the end paragraph, it became very obvious that a lot of this was AI generated. Please, speak in your own voice! The message of this article is almost completely ruined by the use of ChatGPT as a writing crutch.
This is why a lot of people within tech/science circles feel lost and defensive about their work. They barely understand anything about the humanities/social sciences.
Consciousness of what is missing is increasing slowly, thanks to the info tsunami the internet has unleashed.
But that info delivery architecture relies on pseudo-experts and celebs whose survival depends on collecting views, and is delivered to the mind in such random order, with high levels of overstimulation and noise, that it creates even more confusion.
What's missing?
No foundations in Philosophy. No idea where Value Systems come from. No idea how they are maintained, how they learn and adapt to change. No idea why all religious systems train their priests in some form of "Pastoral Care" involving constant contact with ordinary people and their suffering.
So the Vatican survives the fall of nations/empires/plagues/economic downturns/reformation/enlightenment/pedo scandals etc but science/engineering orgs look totally helpless reacting to systemic shocks and go running to Legal/HR/PR people for help.
That's at the org level. At the individual level, most tech folk pretend the limitations/divisions of their own brain/mind don't exist and have no impact on what they build. There is no awareness of what Plato/Hume/Freud/Kahneman have to say about it, and how those divisions of the non-united mind, and the denial of them, affect what gets built. And since the article mentions systems running at different speeds, think about the electrical and chemical signaling in your own mind. Are they happening at the same speed?
So don't try to work all this out by yourself. Multi-disciplinary groups are our only hope. If the org is filled with only engineers, history already shows us how the story unfolds.
"When everything's made to be broken, I just want you to know who I am" (c) Iris by Goo Goo Dolls
I feel it touches on something deep that has to do with the current state of the tech world.
I agree that everything feels broken. I'd like to do something about it. Let me deploy my leverage to work on that. Here's my "labor leverage", right here, this comment. Check. Leverage strength... not much. Let's bring my "capital leverage" to bear... okay, done, my 401(k) is invested in my favorite companies. Did you notice? No? Okay, leverage strength... let's go with epsilon^2. And my "code leverage"... uh... I don't think I have any.
So, wait, I, personally, don't have any third-order leverage at all? How am I supposed to go up against trillion-dollar, billion-node networks, with my epsilon^3 "leverage"?
That's the real problem: I don't actually have any meaningful leverage. I'm not in the game.
Is that actually true? Doesn't matter: I believe it to be true. Sure looks like everything is broken to me.
lol, lmao even. I don't even think this would work if there wasn't capitalism in play.
> Most importantly, we need to redesign how we measure success and allocate resources.
Oh, they *are* suggesting we dismantle capitalism.
Even if we were to dismantle capitalism in America, TikTok is a Chinese business/product. So this has to be a global solution. While we're wielding our lever that can move the globe let's fix global warming, war, hunger, etc while we're at it. No point in half measures.
Does feel good to read though, nice and fluffy.
The question then becomes, how do we increase the velocity of social institutions to keep pace with technology? Balaji’s blockchain native societies come to mind. The comment in the thread about needing philosophical roots in engineering is interesting too. Curious what you think.
Our institutions were more stable before because people were more dedicated to maintaining them. They were created because overt pain or death was the alternative. But then they began to protect us from discomfort. Then from fear of discomfort. And from discomfort that exists only in the mind. This was bound to come apart around the time some people, asked to be afraid, simply said "meh".
Will this go badly? Certainly from the perspective of whether people feel safe and content. But we have had 10,000 years of self-generated strife from humans. It is our fuel.
> The framework begins with a simple observation...: there are three types of leverage.
I get where he's coming from, but "leverage" (AKA competitive advantage in a market?) seems like a woefully inadequate framework with which to understand the overall direction of society. Regardless: what's a "systematic" growth rate...? I'm obviously a math noob, but I've never heard of that, and don't see it mentioned on Wikipedia: https://en.wikipedia.org/wiki/Growth_rate_%28group_theory%29

In general this seems like a good summary of the most Marxist dichotomy of them all, labor vs. capital, with "code" tacked on to the end even though it doesn't really fit. I've cut a lot of responses to the examples below, but I think it's easy to pick out the ones that don't fit once you start looking -- I leave it as an exercise for the reader ;)
> Today, higher-leverage actors are strip-mining institutional commons—democratic norms, social trust, educational relevance, economic mobility—faster than lower-leverage actors can regenerate them.
Another way to say this would be[1]: "Constant revolutionising of production, uninterrupted disturbance of all social conditions, everlasting uncertainty and agitation distinguish the bourgeois epoch from all earlier ones. All fixed, fast-frozen relations, with their train of ancient and venerable prejudices and opinions, are swept away, all new-formed ones become antiquated before they can ossify. All that is solid melts into air, all that is holy is profaned..."

> The leverage arbitrage creates a vicious cycle. As the gap between different leverage types widens, traditional coordination mechanisms become increasingly ineffective. This drives more actors toward higher-leverage approaches, accelerating the divergence. Meanwhile, the institutional commons that make civilization possible—shared truth, democratic discourse, economic mobility, social cohesion—continue degrading because they depend on lower-leverage maintenance that can't compete with higher-leverage extraction.
This seems to be the main thesis, which I'd rephrase as: "Accumulation of wealth at one pole is, therefore, at the same time accumulation of misery, agony of toil, slavery, ignorance, brutality, mental degradation, at the opposite pole, i.e., on the side of the class that produces its own product in the form of capital."[2]

> But understanding this dynamic also suggests solutions. Instead of trying to slow down technological change or somehow make institutions faster, we need "leverage literacy"—helping people recognize the type of power they're actually wielding and use it more consciously.
And this is why I felt compelled to write a long response in the first place. The solution to "the capitalists are controlling everything and I don't like the results" is to band together as workers and exert control over the situation! The stereotype of libertarians as the kind of people to offer the destitute "financial literacy" classes instead of help is a trite one, but clearly not obsolete...

> Most importantly, we need to redesign how we measure success and allocate resources. Current systems reward short-term optimization within single leverage types while ignoring long-term effects across leverage types. The result is systematic underinvestment in the institutional commons that higher-leverage systems depend on to function sustainably.
Yes, I agree -- we need to greatly reduce/eliminate the power that shareholders have to prioritize YoY equity growth over the needs of society! Another way to say this would be "As capitalist, he is only capital personified. His soul is the soul of capital. But capital has one single life impulse, the tendency to create value and surplus-value, to make its constant factor, the means of production, absorb the greatest possible amount of surplus-labour. Capital is dead labour, that, vampire-like, only lives by sucking living labour, and lives the more, the more labour it sucks."[3]

Or, in the words of noted anti-Capitalist Sam Altman: "We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project."[4]
> The choice isn't between technology and tradition—it's between conscious coordination across leverage levels and unconscious optimization within leverage silos.
This really sums up our disagreement. He thinks the solution to the problems he details is to coordinate more closely with the capitalists -- I feel confident that it's to disempower them. To put it in libertarian terms: their incentives are not a good fit for this situation!

[1] The Communist Manifesto: https://www.marxists.org/archive/marx/works/1848/communist-m...
[2] Das Kapital, Chapter 25: https://www.marxists.org/archive/marx/works/1867-c1/ch25.htm
[3] Das Kapital, Chapter 10: https://www.marxists.org/archive/marx/works/1867-c1/ch10.htm
[4] The OpenAI Charter, April 2018: https://web.archive.org/web/20230714043611/https://openai.co...
Lose that, and you'll be stuck in a stagnated first past the post world.
Does that mean all new is good and old is bad? No. And 'hypernovelty' has huge problems, as it leaves no time for individuals or society to adapt. But tread carefully with what you wish for.
nosefurhairdo•5h ago
Or the idea that democracy can't adapt to social media discourse; not everyone is chronically online. Politicians still respond to public sentiment to a similar degree as they always have.
Then there's this:
> AI systems aren't just tools—they're deployed faster than we can develop frameworks for understanding their social implications.
If they aren't just tools, what are they? Why do we need a framework for understanding their social implications?
Post feels like a fever dream of someone who fell asleep to the Navalmanack audiobook.
motorest•4h ago
To me, it reads like a desperate far-fetched argument to deny employees a fair compensation for their work. As if there is any virtue in stiffing people out of their paycheck.
jrflowers•4h ago
I would guess that the argument for wanting a framework for understanding language models’ social implications would be the social implications of language models. Like a phenomenon existing is a valid argument for understanding it.
https://www.theatlantic.com/technology/archive/2025/07/chatg...
https://www.rollingstone.com/culture/culture-features/ai-spi...
https://www.psychologytoday.com/us/blog/urban-survival/20250...
>not everyone is chronically online
Do you have data to support this? What does “chronically” mean and why would its absence invalidate the idea of social media impacting how people act and vote?
sien•4h ago
From:
https://en.wikipedia.org/wiki/Unicorn_(finance)#History
So not only is there no data to back up the author's claim, but it is wrong and could be checked by looking at Wikipedia.
You don't have to be right, you can make claims, but when you make grand claims you should at least check Wikipedia.
nothrabannosir•4h ago
Counting unicorns only serves to bolster that point: those are large VC-fueled ships which operate on a completely different level. Because it is clear at founding time which type of company they are, it's reasonable to include that as a qualifier.
Now if the data showed more small-growth companies were started, that'd be a stronger counterargument.
cluckindan•4h ago
https://news.crunchbase.com/startups/google-stanford-and-the...
jrflowers•2h ago
In fact since the number of billion dollar valuations goes up by an order of magnitude every few years we are on track for every person on earth to start a billion dollar corporation in a few short decades. This is proof that you could see on Wikipedia that entrepreneurship is doing great
bbor•3h ago
1. This post seems likely to have been written with the assistance of a chatbot, which explains this phrase. The "they aren't just tools" phrase is only there because he needed a third example for a "it's not just X--it's Y" paragraph, one of ChatGPT's all-time fave constructions.
2. Another cause is more fundamental: the third "leverage" doesn't really apply to this discussion IMO. It's probably useful elsewhere, but owning a large social media company is just a different variety of capital, not some mutually-exclusive group. All expressions of capitalist power in 2025 have some amount of technological amplification going on.
vineyardmike•2h ago
Yes, Google’s high salaries are part of a system that has been reworking entrepreneurship in Silicon Valley. This has been documented and discussed at length. Did you look for data?
Big tech pays in valuable stock, and salaries can reach upwards of 500k for relative rank and file positions (not rare one offs). Over a decade, that’s $5M. At the same time, VC firms have been holding companies private longer raising more rounds, which often dilutes the employee shares and reduces the “reward” for employees waiting for an IPO. If that new diluted IPO rewards an employee under $5M for a decade of employment, they were better off at Google/Meta/etc. Startups were always a lottery ticket, but if a “winning” ticket is less profitable than not playing, why join at all?
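The back-of-envelope comparison above can be sketched in a few lines of Python. To be clear, every input here is a hypothetical illustration of mine, not real cap-table data: the 0.5% grant, the 20% dilution per round, and the $1B exit are made-up numbers chosen to show the shape of the argument.

```python
def startup_payout(stake_pct, exit_value, dilution_per_round, extra_rounds):
    """Employee payout after each extra funding round dilutes their stake."""
    diluted_pct = stake_pct * (1 - dilution_per_round) ** extra_rounds
    return exit_value * diluted_pct

# The comment's big-tech benchmark: roughly $500k/year over a decade.
big_tech_decade = 500_000 * 10  # $5,000,000

# Hypothetical early employee: 0.5% stake, four extra rounds each diluting 20%,
# company eventually exits at a $1B "unicorn" valuation.
payout = startup_payout(0.005, 1_000_000_000, 0.20, 4)
# 0.005 * 0.8**4 = 0.2048% of $1B = $2.048M -- well under the big-tech path
```

Under these made-up numbers, even a "winning" lottery ticket pays less than staying put, which is exactly the comment's point about diluted IPOs.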
This plays directly into the thesis that the powerful are extracting additional resources at the expense of cultural expectations and understandings. VC firms diluting employees is profitable for VCs, but it jeopardizes the Silicon Valley startup ecosystem if smart people prefer better compensation. Same with the recent AI acqui-hire controversies like Windsurf. Why join a startup if the CEO will take a billion-dollar payout and leave the employees with worthless stock?