> Purpose-driven institutions [...] empower individuals to take intellectual risks and challenge the status quo.
(which of course includes, and is most often, the institution itself.)
Almost the defining problem of modern institutions is sweeping problems under the rug, be it climate change or (most importantly) Habermas's "Legitimation Crisis" [1]. It's something I've been watching happen at my Uni ever since I've had anything to do with it. The spectacle of institutions failing to defend themselves [2] turns people against them.
Insofar as any external threat topples an institution, or even threatens it seriously, there was a failure of homeostasis and boundaries from the very beginning.
[1] https://en.wikipedia.org/wiki/Legitimation_Crisis_(book)
[2] ... the king is still on the throne, the pound is still worth a pound ...
Every institution (let's say - my household) sweeps problems under the rug. It's the euphemism for problems that aren't worth dealing with.
Institutions (in the form discussed) are either reinvented from the inside out and thus are and remain institutions, or are "toppled" in which case they are not "institutions" but "failures".
Think of how the Tea Party and Libertarian movements affected Republican politics in years past, or how completely alien the party is compared to a decade ago. The institution of the "Republican party" persists, even though it's nothing like its former self.
Same name, same "it's all fine just keep trusting us" but meanwhile quietly burned to the ground from the inside out.
https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
(whatever else the centre stands for in any institution, one needs to see the dividing line _clearly_. What is the dividing line that defines the R-C in HN? It usually has something to do with things that feel like they cannot be said, yes!)
Of course details matter*, but how else can we pull the rabbits out from under the rug?
*Such as virtuoso listening skills, a functional beta-blocker or quetiapine regimen, etc.
lagniappe: https://www.youtube.com/watch?v=CcJ9QDN_-cw
"Decoupling by coupling" is my name for this confusion :)
Concrete to-do is to dissect and mount some YouTube transcripts for ah, what I'd call "offspring activation"
On the other axis, there is mention of "lying flat", so that's where I've been nudging the trolls :)
For both I'll have to read Hiroyuki Nishimura. May take a month! (Both the original and the Chinese version)
"Things that cannot be said" as per Orwell are indeed part of the troll-sploration, shall we say, common, objectives.. ?
L: https://youtu.be/HBURlcNpfoo
PS: Buxton 100 has my vote :)
https://web.archive.org/web/20220304082837/https://books.goo...
Panties!! Panties!! Panties!!
(As capital)
For a cheap summary
(Book and person seem both more "oddball" and less "oddball" than this article makes them out to be:
https://youtube.com/shorts/m56cYM78wV4
https://youtube.com/shorts/QsG1WYq_1h8
Who is more hipster XD. Tough question
Who? Who are the other candidates, to be compared with Nishimura-san?
(or do the vids answer that? will watch them soon)
Poole+friends (in the 2nd vid -- note Hiroyuki never says "-san" but the translator does)
Btw B4 I forget: PG got hold of the redditors, middle-middle class to middle-middle class (culturally speaking)
So is the basic issue (in Linebargerian terms), how to offer decency, goodness, security, prosperity, authority, liberty under law to people who are searching for glamor, terror, inspiration, and romance instead?
(please let me know if you run across any digitisation of Lewis, "Hitler", 1930 ... https://www.monmouth.edu/department-of-english/documents/wyn...
...when Lewis states, "the German Nation has the chance at present of voting for its future tyrant," he is really saying "at least Hitler is not you, Mr. Tyrannical Democrat." ... by supporting a tyrant who menaces authority figures in his own country, he demonstrates that he won't submit to them. ... the puerile fantasy of Oedipal mastery is underscored by an ironic posture that, on this occasion, was ill-chosen.)
Channel Gen-Z's latent hipsterism (isolation from upper-middle intellectualism) into (gender-neutral) "bernie-broism". If that makes sense.
https://en.wikipedia.org/wiki/Bernie_Bro#:~:text=According%2...
(This assumes that those who run trade schools are able to peddle your Linebargerian goods unbundled from asinine* baggage)
*neuroses signified by The party mascot
PS: HN front-page comments from my (otherwise uninformed) perspective: partition into US millennials-to-boomers / European+Oceania all ages / Gen-Z-to-late-millennial other regions (country of birth, not domicile).
Heh, just a note on https://news.ycombinator.com/item?id=46541796 : imagine a therapist believing that the proper route is to help someone cognitively figure it out by asking the right questions! Almost as difficult as imagining a dev who believes the proper route is to write a script that does it... la déformation professionnelle?
Wrt: "once you know the right 20-30 people or their friends or colleagues" it makes me wonder if a good corruption coefficient might be "how many zeros does it take[0] before you stop seeing impartial institutions and start seeing corruption". I believe those 20-30 people exist in almost all countries, it's just in some lucky countries you don't notice them at USD 6-7 zeros, but do start noticing them[1] at 9-10 — and in some other countries, you even have the 20-30 people in the quarter who are corruptible for 2-3 zeros.
[0] or maybe an even better index would be calibrated to wealth distribution within the jurisdiction?
[1] ask anyone investing in US offshore wind about US institutional impartiality these days!
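A toy sketch of that coefficient in Python (the names, thresholds, and the wealth calibration from [0] are my made-up illustrations, not a serious metric):

    import math

    def corruption_zeros(bribe_usd):
        # "Zeros" of the smallest sum at which institutions stop looking impartial.
        return int(math.log10(bribe_usd))

    def calibrated_index(bribe_usd, median_wealth_usd):
        # Footnote-[0] variant: normalize by local wealth so jurisdictions compare fairly.
        return math.log10(bribe_usd / median_wealth_usd)

    print(corruption_zeros(10**10))         # 10 -> you only notice at billions
    print(corruption_zeros(10**3))          # 3  -> corruptible for pocket change
    print(calibrated_index(10**10, 10**5))  # 5.0 once local wealth is factored in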
Q. Should I read a translation of "Osudy dobrého vojáka Švejka za světové války" (The Good Soldier Švejk) sometime?
https://hn.algolia.com/?dateRange=pastYear&page=0&prefix=tru...
Wrt the rest: a few more cycles of reflection needed...
(There's public evidence here that 9-10s won't unearth more than the 20 that any Joe knows about. The other 10 spurious ones depend on the situation and mission. I expect the situation in CH is almost the same, via a different design)
Compare with HN :)
Point the user^W dev towards better "tools of thought" that decouple problem-refinements from solutions.
TLA+, proof assistants, AIDEs, etc
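For a taste of that decoupling, here's a minimal Python stand-in for what TLA+ and friends do properly (the toy "spec" is purely illustrative, my own):

    # Problem refinement: a property any solution must satisfy,
    # stated without committing to an implementation.
    def satisfies_spec(sort):
        cases = [[], [1], [3, 1, 2], [2, 2, 1]]
        return all(sort(list(c)) == sorted(c) for c in cases)

    # One candidate solution; it can be swapped out without touching the spec.
    def insertion_sort(xs):
        out = []
        for x in xs:
            i = len(out)
            while i > 0 and out[i - 1] > x:
                i -= 1
            out.insert(i, x)
        return out

    assert satisfies_spec(insertion_sort)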
Outside HN, I've been mostly refining my examples "by hand"..
(Hope you guys made it to the CNSOER prequals but if not, I suspect you already know the 10 guys that you need to call :)
The link to download the paper is here: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5870623
Just as an example, they criticize the FDA for using an AI that can hallucinate whole studies, but they don't mention that it's used for product recalls, and the source they cite for this criticism is an Engadget article covering a CNN article that got the facts wrong, since it relied on anonymous sources: disgruntled employees who had since left the agency.
Basically what I'm saying is the more you dig into this paper, the more you realize it's an opinion piece.
Either way, it's an effort, and at least the authors will learn what not to do.
It reads like a trademark attorney turned academic got himself interested in "data" and "privacy," wrote a book about it in 2018, and proceeded to be informed on the subject of AI almost exclusively by journalists from popular media outlets like Wired/Engadget/The Atlantic, bringing it all together by shoddily referencing his peers at Harvard and curious-sounding '80s sociology. But who cares, as long as AI bad, am I right?
If you'd like to hear my opinion, I happen to think that LLM technology is the most important, arguably the only thing, to have happened in philosophy since Wittgenstein; indeed, Wittgenstein presents the only viable framework for comprehending AI in all of humanities. Part because it's what LLM "does"—compute arbitrary discourses, and part because that is what all good humanities end up doing—examining arbitrary discourses, not unlike the current affairs they cite in the opinion piece at hand, for arguments that they present, and ultimately, the language used to construct these arguments. If we're going to be concerned with AI like that, we shall start by making effort to avoid all kinds of language games that allow frivolously substituting "what AI does" for "what people do with AI."
This may sound simple, obvious even, but it also happens to be much easier said than done.
That is not to say that AI doesn't make a material difference to what people would otherwise do without it, but exactly like all of language is a tool—a hammer, if you will, that only gains meaning during use—AI is not different in that respect. For the longest time, humans had monopoly on computing of arbitrary discourses. This is why lawyers exist, too—so that we may compute certain discourses reliably. What has changed is now computers get to do it, too; currently, with varying degree of success. For "AI" to "destroy institutions," or in other words, for it doing someone's bidding to some undesirable end, something in the structure of said institutions must allow that in the first place! If it so happens that AI can help illuminate these things, like all good tools in philosophy of language do, it also means that we're in luck, and there's hope for better institutions.
> If you'd like to hear my opinion, I happen to think that LLM technology is the most important, arguably the only thing, to have happened in philosophy since Wittgenstein;
So, assume cognitive bias and a penchant for hyperbole.
> LLM technology is the most important, arguably the only thing, to have happened in philosophy
Why would "LLM technology" be important to philosophy?
> arguably the only thing, to have happened in philosophy
Did "LLM technology" "happen in philosophy"? What does it mean to "happen in philosophy"?
> indeed, Wittgenstein presents the only viable framework for comprehending AI in all of humanities.
What could this even mean?
Linguistics would appear to be at least one other humanity applicable to large language models.
Wittgenstein was famously critical of Turing's claim that a machine can think, to the extent that he claimed it led Turing into misunderstandings even in his mathematics.
Wittgenstein also disliked Cantor, and even the concept of 'sets'.
I am struggling to see how this all adds up to being the "only viable framework for comprehending AI".
> If it so happens that AI can help illuminate these things, like all good tools in philosophy of language do, it also means that we're in luck, and there's hope for better institutions.
This is a wild ride.
So, "AI" exploits weaknesses in institutions, but this is different from "destroying institutions", and its a good thing because we can improve the institutions by fixing the exploitable areas; which is also a wholly speculative outcome with many counterexamples in real life.
Reads like: "Sure, I broke your window and robbed your store, but you should be thanking me and encouraging me to break more windows and rob more people because I illuminated that glass is susceptible to breaking when a rock is thrown at it. Oh, your shit? I'm keeping it. You're welcome."
> Why would "LLM technology" be important to philosophy?
Well, because it has empirically proved that Wittgenstein was more or less right all along, and linguists like Chomsky (I would go as far as saying Kripke, too, but that's a different story) were ultimately wrong! To put it simply: in order to learn language, and by extension, compute arbitrary discourses, you don't need to ever learn definitions of words. All you need is demonstrations of language use. The same goes for syntax, grammar, and a bunch of other things linguists were obsessing about for decades, like modality. (But that's a different story altogether!) Computer science people call this the bitter lesson, but that is only a statement on predictive power, not emergent power. If it only ever were the case for learning existing discourses, that wouldn't be remotely as surprising. Computing arbitrary discourses is a much stronger proposition!
> Did "LLM technology" "happen in philosophy"? What does it mean to "happen in philosophy"?
LLMs were a bit of a shock, and a lot of people are not receptive to this idea that Wittgensteinians won, basically, game over. There will be more flailing, but ultimately they will adapt. You can already see this with Askell and other traditionally-trained philosophy people adopting language games, it's only that they call it alignment. Nor is it a coincidence that she went to Cambridge. It will take a bit of time for "academic philosophy" to recognise this, but eventually they will, because why wouldn't they?
Game over.
> Linguistics would appear at least one other of the applicable humanities to large language models.
Yeah, not really. All the interesting stuff that is happening has very little to do with linguistics. There's prefill from grammar, but it would be a stretch to attribute it to linguistics. In the linguistics literature, word2vec was a big deal for a while, but they've done fuck-all with it since. I'm not trying to be hyperbolic here, either.
> Wittgenstein was famously critical of Turing's claim that a machine can think
I never understood this line of reasoning. So what if Witt. and Turing had disagreements at the time? Witt. never had a chance to see LLMs, or anything remotely like them. This was an unexpected result, you know? We could have guessed that it would be the case, but there was no evidence. We still don't have a solid theory to go from Frege to something like modern LLMs, and we may never have one, but the evidence is there: Wittgenstein was right about what you need for language to work.
> Wittgenstein also disliked Cantor, and even the concept of 'sets'.
I don't see what this has to do with anything?
> So, "AI" exploits weaknesses in institutions, but this is different from "destroying institutions", and it's a good thing because we can improve the institutions by fixing the exploitable areas; which is also a wholly speculative outcome, with many counterexamples in real life.
I never said AI "exploits" anything. I only ever said that being able to compute arbitrary discourses opens many more doors than a pigeonhole insinuation like that would entail. What wasn't obvious before is becoming obvious now. (This is why all these people are coming out with "revelations" on how AI is destroying institutions.) And it's not because of material circumstance. Just that some magic was dispelled, so stuff became obvious, and this is philosophy at work.
This is real philosophy at hand, not some academic wanking :-)
> Game over.
Is a perfect example. What "game" is "over"?
Chomsky's philosophical linguistics have long been derided and stripped for parts, and he was friends with Epstein and his cohorts so he can fuck right on off to disgrace and obscurity, but his goals within linguistics, as I understand them, were to identify why humanity has its faculty of language.
Wittgenstein was uninterested in answering the same question, and large language models are about as far from an answer to that question as one can get.
So, again, I am unsure what has been settled to the point of declaring "Game over".
Does this game only have two "teams"? One possible "outcome"?
Who's on what side of the "game"?
What have they said that shows their allegiance to one idea, and what have they said in opposition to the other?
What about large language models either support or contradict, respectively, said ideas?
As a huge fan of the ideas and writings of Wittgenstein I find it hard to believe that there are contemporary 'philosophers' who disagree with his ideas, namely that words take on meaning through context, but there are certainly trolls and conservatives in every field.
Disgruntled doesn't mean inaccurate.
EDIT: citing some resources here for those that are curious.
Original article cited by the paper: https://www.engadget.com/ai/fda-employees-say-the-agencys-el...
Actual CNN article the Engadget article is based on: https://www.cnn.com/2025/07/23/politics/fda-ai-elsa-drug-reg...
Neither present a serious take about what's going on with Elsa at the FDA. They both glom onto a handful of anonymous sources without looking deeper.
A far more serious take about Elsa's use at the FDA in a journal that prides itself on scientific rigor and ethical behavior: https://publichealthpolicyjournal.com/elsa-llm-at-the-fda-a-...
Hard to take seriously with so many misspellings and duplicate punctuation.
I vibe with the general "AI is bad for society" tone, but this argument feels a lot to me like "piracy is bad for the film industry" in that there is no recognition of why it has an understandable appeal to the masses, not just cartoon villains.
Institutions bear some responsibility for what makes AI so attractive. Institutional trust is low in the US right now; journalism, medicine, education, and government have not been living up to their ideals. I can't fault anyone for asking AI medical questions when it is so complex and expensive to find good, personalized healthcare, or for learning new things from AI when access to an education taught by experts is so costly and selective.
Very bad writing, too, with unnecessarily complicated constructions and big words seemingly used without a proper understanding of what they mean (machinations, affordances).
If you want to further freak yourself out about probability, look up Bertrand's Paradox and the Problem of Priors.
Yes, social science is less accurate than what people call hard science, but the edges of scientific systems should make you question the validity of even hard science. It's pragmatically useful, yes, but metaphysical Truth? No.
So, web programmers could be going against AI on the grounds of self-preservation and be wholly justified in doing so, but lawyers are entitled to go after AI on more fundamental, irreconcilable differences. AI becomes a passive 'l'état, c'est moi' thing, locking in whatever it's arrived at as a local maximum and refusing to introspect. This is antithetical to law.
But day to day, they spend a lot of their time selling boilerplate contracts and wills, or trying to smuggle loopholes into verbose contracts, or trying to find said holes in said contracts presented by a third party[1]
Or if they are involved in criminal law, I suspect they spend most of their time sifting the evidence and looking for the best way to present it for their client - and in the age of digital discovery the volume of evidence is overwhelming.
And in terms of delivering justice in a criminal case - isn't that the role of the jury (if you are lucky enough still to have one)?
I suspect very few lawyers ever get involved in cases that lead to new precedents.
Most of the sections had no citations, just anecdotes.
Like many who have predicted doom, or that ChatGPT can never do XYZ: anecdotes do not build a substantive argument.
Are we only talking about technological revolutions here, or are you talking about peasant uprisings in China 1000 years ago?
The moments after the revolution might be worse, but in the long term, we got better.
Let's not forget this characterisation appeared only centuries later, and without consensus.
Neither being able to speak to someone on a computer nor videos on the internet is new, fancy Web 10.0 frontend notwithstanding.
> and isolated people from each other.
I assume you mean doomscrolling as opposed to the communication social media affords, because social media actually connects us (unless apparently it's Facebook, then messaging is actually bad).
Part of the problem is that social media isn't social media anymore. It's an algorithmic feed that only occasionally shows content from people you're friends with. If Facebook went back to its early days, when it was actually a communication tool, then I don't think you would see the same complaints about it.
https://www.law.cornell.edu/definitions/uscode.php?def_id=42...
> It's an algorithmic feed that only occasionally shows content from people you're friends with
Problem how? I'll assume you mean the problem is that it shows you or other people stuff that will turn them toxic, not that it literally shows you other people's content.
JFC downvoters; when I posted this comment the title did not match the article title.
Social media was already isolating people. It is being sped up by the use of AI bots (see dead internet theory). These bots are being used to create chaos in society for political purposes, but overall it's increasingly radicalizing people and as a result further isolating everyone.
AI isn't eroding college institutions, they were already becoming a money grab and a glorified jobs program. Interpersonal relationships (i.e. connections) are still present, I don't see how AI changes that in this scenario.
I am not a fan of how AI is shaping our society, but I don't place blame on it for these instances. In my opinion, AI is just speeding up these aspects.
The article does highlight one thing that I do attribute to AI, and that is the lack of critical thinking. People are thinking less with the use of AI, instead of spending time evaluating, exploring, and trying to think creatively. We are collectively offloading that to AI.
Hard working expert users, leveraging AI as an exoskeleton and who carefully review the outputs, are getting way more done and are stronger humans. This is true with code, writing, and media.
People using AI as an easy button are becoming weaker. They're becoming less involved, less attentive, weaker critical thinkers.
I have to think that over some time span this is going to matter immensely. Expert AI users are going to displace non-AI users, and poor AI users are going to be filtered at the bottom. So long as these systems require humans, anyway.
Personally speaking:
My output in code has easily doubled. I carefully review everything and still write most stuff by hand. I'm a serious engineer who built and maintained billion dollar transaction volume systems. Distributed systems, active active, five+ nines SLA. I'm finding these tools immensely valuable.
My output in design is 100% net new. I wasn't able to do this before. Now I can spin up websites and marketing graphics. That's insane.
I made films and media the old fashioned way as a hobby. Now I'm making lots of it and constantly. It's 30x'd my output.
I'm also making 3D characters and rigging them for previz and as stand-ins. I could never do that before either.
I'm still not using LLMs to help my writing, but eventually I might. I do use it as a thesaurus occasionally or to look up better idioms on rare occasion.
To risk an analogy: if I throw petrol onto an already smouldering pile of leaves, I may not have 'caused' the forest fire, but I have accelerated it so rapidly that the situation becomes unrecognisable.
There may already have been cracks in the edifice, but they were fixable. AI takes a wrecking ball to the whole structure
AI may have caused a distinct trajectory of the problem, but the old system was already broken and collapsing. Whether the building falls over or collapses in place doesn't change that the building was already at its end.
I think the fact that AI is allowed to go as far as it has is part of the same issue, namely, our profit-at-all-costs methodology of late-stage capitalism. This has led to the accelerated destruction of many institutions. AI is just one of those tools that lets us sink more and more resources into the grifting faster.
(Edit: Fixing typos.)
Also, the fault there lies squarely with charlatans who have been asked/told not to submit "AI slop" bug bounties and yet continue to do so anyway, not with the AI tools used to generate them.
Indeed, intelligent researchers have used AI to find legitimate security issues (I recall a story last month on HN about a valid bug being found and disclosed intelligently with AI in curl!).
Many tools can be used irresponsibly. Knives can be used to kill someone, or to cook dinner. Cars can take you to work, or take someone's life. AI can be used to generate garbage, or for legitimate security research. Don't blame the tool, blame the user of it.
By how much and how consequential exactly, and how would we know?
There were 14,650 gun deaths in the US in 2025, apparently. There were 205 homicides by knife in the UK in 2024-2025 [0][1]. Check their populations: US gun deaths per capita seem to exceed UK knife deaths by roughly 15x (quick arithmetic after the links).
[0] https://www.thetrace.org/2026/01/shooting-gun-violence-data-...
[1] https://commonslibrary.parliament.uk/research-briefings/sn04...
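The arithmetic behind that ~15x, as a quick sanity check (the population figures are my rough assumptions, not from the sources above):

    us_gun_deaths, us_pop = 14_650, 335_000_000     # assumed US population ~335M
    uk_knife_homicides, uk_pop = 205, 68_000_000    # assumed UK population ~68M

    us_rate = us_gun_deaths / us_pop * 100_000      # ~4.37 per 100k
    uk_rate = uk_knife_homicides / uk_pop * 100_000 # ~0.30 per 100k
    print(f"ratio: ~{us_rate / uk_rate:.1f}x")      # ~14.5x, i.e. roughly 15x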
So I'll stand by the stance that individuals are responsible for their own actions, that tools cannot bear responsibility for how they are used on account of being inanimate objects, and that all tools serve constructive and destructive purposes, sometimes simultaneously.
And since the US handles guns so laxly, they are a problem.
A vocal minority is causing a lot of problems (but the US is not even enforcing its existing gun control laws sufficiently).
Individuals are responsible, but that doesn't mean that the tool is not a significant factor.
And hence the recommendation is to have better control over who gets the tool (and not an emotionally charged "scary rifle" ban).
Yet gun deaths by suicide and murder per 100k people have hovered between 5 and 7 over the same period: https://www.pewresearch.org/short-reads/2025/03/05/what-the-...
I also found the stats on this site interesting (many are estimates):
https://www.nationmaster.com/country-info/stats/Crime/Murder...
https://www.nationmaster.com/country-info/stats/Crime/Violen...
https://www.nationmaster.com/country-info/stats/Crime/Violen...
> Individuals are responsible, but that doesn't mean that the tool is not a significant factor.
Individuals are responsible. No buts. And there is no solving violence on any scale without understanding and addressing the reasons someone might commit it. This is a rabbit hole of difficult and uncomfortable truths we must address as a society.
People voting for or against gun control also have some responsibility. (Australia's National Firearms Agreement comes to mind.) Similarly, people in the EU who voted (or continued to vote) for using cheap Russian gas even after 2014, and even after 2022, certainly share some responsibility. Maybe even more than the conscripts coerced to be on the front.
I think structural effects dominate in many cases. (IMHO local crime surges are perfect evidence for this, and even though the FBI crime data is slow and not detailed enough, the city-level data is good enough to see things like a homicide spike after a "viral police misconduct incidents" -- https://www.nber.org/papers/w27324 and this is even before George Floyd -- and https://johnkroman.substack.com/p/explaining-the-covid-viole... which shows how much of an effect policing has on homicides.)
Tool availability is an important factor, and in the US it's a drastically huge effect, because the other factors that could counteract it are also mostly missing.
We can simply apply the Swiss cheese model to every shooting and see that many things had to go wrong. Of course, focusing only on guns while neglecting the others would lead to an increase in knife deaths.
I think there's a general feeling that AI is most readily useful for bad purposes. Some of the most obvious applications of an LLM are spam, scams, or advertising. There are plenty of legitimate uses, but they lag compared to these because most non-bad actors actually care about what the LLM output says and so there are still humans in the loop slowing things down. Spammers have no such requirements and can unleash mountains of slop on us thanks to AI.
The other problem with AI and LLMs is that the leading-edge stuff everyone uses is radically centralized. Something like a knife is owned by the person using it. LLMs are generally owned by one of a few massive corps, and the best you can do is sort of rent them. I would argue this structural aspect of AI is inherently bad regardless of what you use it for, because it centralizes control of a very powerful tool. Imagine a knife where the manufacturer could make it go dull or sharp on command depending on what you were trying to cut.
AI just made the cost of entry very low by pushing the cost onto the people offering the bounty.
There will always be a percentage of people desperate enough, or without scruples, who can do that basic math. You can blame them, but it's like blaming water for being wet.
When you attribute blame to technologies, you make it difficult to use technologies in the construction of a more ethical alternative. There are lots of people who think that in order to act ethically you have to do things in an artisanal way; whether it's growing food, making products, services, or whatever. The problem with this is that it's outcompeted by scalable solutions, and in many cases our population is too big to apply artisanal solutions. We can't replace the incumbents with just a lot of hyper-local boutique businesses, no matter how much easier it is to run them ethically. We have to solve how to enable accountability in big institutions.
There's a natural bias among people who are actually productive and conscientious, which is that an output can only be ethical if it's the result of personal attention. But while conscientiousness is a virtue in us as workers, it's not a substance that is somehow imbued in a product; if the same product is delivered with less personal attention then it's just as good - and much cheaper, and therefore available to more people, which (if the product is good for them) makes it more ethical and not less.
(I'm making a general point here. It's not actually obvious to me that AI is an essential part of the solution either)
You just have to be careful not to say "this is AI's fault" - it's far more accurate, and constructive, to say "this is our fault, this is a problem with the way some people choose to use LLMs, we need to design institutions that aren't so fragile that a chatbot is all it takes to break them."
Like, we need to design leaves that aren't so fragile that a petrol fire can burn them.
I don't agree that's more constructive. We need to defend the institutions we've got.
It took 2 world wars to motivate us to create the current institutions. You think we will be less lazy and more motivated than those people were?
This is how we get food that has fewer nutrients but ships better, free next-day delivery of plastic trash from across the world that doesn't work, schools that exist to extract money rather than teach, social media that exists primarily to shove ads in your face and trick you into spending more time on it.
In the next 4 years we will see the end of the American experiment, as shareholder capitalism completely consumes itself and produces an economy that can only extort and exploit but not make anything of value.
You can name a lot of symptoms of the problem but at its heart there's a lack of accountability in any of our power structures whether they be corporate or government.
What works is the threat of punishment and full liability as opposed to limited liability. Regulations just raise entry barriers and stifle competition which makes the system less fair. It's like trying to prevent a crime before it happens; makes no sense. If liability is limited it means that somebody is not being held accountable for some portion of the damage that they're doing. Limited liability just externalizes the surplus liability to society...
I think capitalism can work if operating on a level monetary playing field within simple guardrails but without regulations. We could have wealth tax above a certain high amount to prevent political power imbalance.
> What works is the threat of punishment
That is what regulations provide.
> full liability
Without regulation and just a court system, this is a complete failure. It just ensures that you can harm people who can't afford expensive lawsuits. Which is why big companies who want to pollute prefer this over regulations.
And the most harmed companies are small ones. They do not know in advance what is allowed and what is not.
And yes, some things do need higher bars to entry than others. That's a feature. You don't want just anyone handling the food you eat or the money you store.
By that metric, capitalism has failed as well. Any successes came from breaking the pure principle and either breaking apart competition (antitrust), regulating competition to comply, or employing non-capitalistic services to support it (social security being one of the big ones).
No point talking in absolutes.
What year do you think was the first year of capitalism? Depending on your starting point, it caused the American Revolution and French Revolution.
It caused destruction of monarchy.
If the "fitness function" of the system is "produces more economic value" then it will select for (encourage) the first option because health and enjoyment of the consumer aren't being selected for. They are second-order effects at best, like pollution and other externalities.
I'm reminded of the RFK speech (the dead one, not the death-adjacent Jr.):
"Yet the gross national product does not allow for the health of our children, the quality of their education or the joy of their play. It does not include the beauty of our poetry or the strength of our marriages, the intelligence of our public debate or the integrity of our public officials. It measures neither our wit nor our courage, neither our wisdom nor our learning, neither our compassion nor our devotion to our country, it measures everything in short, except that which makes life worthwhile."
(Re-reading this, the part I glossed over is that choosing the cheap/quick meal leaves more time for "work")
Sort of. To add to what the other replies had to say, the US government subsidizes different things. That's why even basic ingredients may have high-fructose corn syrup in them, be it as a primary ingredient, or to try and dilute the actual ingredient you want in that particular piece of food.
And since it's subsidized, these can be cheaper to eat here than some good fruits and veggies.
Capitalism is the manifestation of the aggregate human psyche. We've agreed that this part of our selves that desires to possess things and the part that feels better when having even more, is essential. This is the root we need to question, but have not yet dared to question. Because if we follow this path of questioning, and continue to shed each of our grasping neuroticisms, the final notion we may need to shed is that we are people, individual agents, instead of nonseparate natural phenomena.
We will have to confront that question eventually because we will always have to face the truth.
Unimaginable wealth means you live as comfortably as you want; no wealth means you are out on the streets and can't even afford the basics needed to get yourself out of the rut society threw you in.
If I'm to take this as a comparison of "wealth ends up in the hands of one", the difference with communism is that the one with the wealth still needs to distribute it, lest they are driven out by a coup or by annihilating all the power they had (the power over their people, who are now dead or fled).
Capitalism makes no such promise of distribution, and who to rise up against is much less clear. Toppling a monopoly leader also doesn't necessarily destroy the institution either.
>the final notion we may need to shed is that we are people, individual agents, instead of nonseparate natural phenomena.
If we give up our humanity to someone else, we may as well be. But that's not something I relinquish easily.
Capitalism assigns a price to this, makes it more efficient. (By allowing people to buy/rent productive things (land, machines) hire people, and buy unproductive setups, improve it, and earn a profit on the effect of the improvement itself.)
If you think "shareholder capitalism" overplayed this, well, maybe, but it seems that manufacturing is getting fucked by tariffs, construction is getting fucked by NIMBYism, and ultimately the world is getting fucked by lack of improvements, by standing still, by regressing to a past that never was despite the costs, and not because people want to make number go up!
Of course there's a ton of problems with power concentration everywhere, but market liberalism correlates with liberty and well-being, and the solution is not USSR-style denial of markets (and in general, behavioral-, and micro- and macroeconomics), it's understanding them, and using taxes to help people to participate in them.
> by regressing to a past that never was despite the costs
People assume that rejecting capitalism requires us to take a step backwards. Why would that be? If you woke up tomorrow and there was more public housing your iPhone wouldn't disappear.
Theoretically replacing capitalism with something else is not the issue. (As long as there are accurate supply-and-demand signals for efficient allocation of resources).
The issue is that people ideologically want to "set" inconsistent supply-and-demand curves. And since there's no signal things look fine and dandy initially. And then the usual smudging of the numbers start to happen. ( https://slatestarcodex.com/2014/09/24/book-review-red-plenty... )
Of course, in a capitalistic system there's a very crude exchange rate for the things we want and the things we have through the profit motive (with all the speculation and technological (im)possibilities and everything added in), but it's usually more "correct" than numbers set by committees of people really really wanting to have something while denying some specific - usually hard to separate - aspect of that. (For example lot of people really don't like it when people 'inherit' easy money for very good reasons and this gets amplified when it comes to real estate, and this is a very big factor why a lot of NIMBY ideas found good traction with young "progressives".)
I think the technical term is "throwing gas on the fire." It's usually considered a really bad thing to do.
> I am not a fan of how AI is shaping our society, but I don't place blame on it for these instances. In my opinion, AI is just speeding up these aspects.
If someone throws gas on a fire, you can totally blame them for the fire getting out of control. After all, they made it much worse! Like: "we used to have smouldering brush fire that we could put out, but since you dumped all that gas on it, now we will die because we have a forest fire raging all around us."
The roots of the problem are very real and very complex but forcing them to be addressed quickly throws people into panic mode and frankly that leads to sloppy solutions that are going to cause the cycle to repeat (though will temporarily solve some problems, and this is far better than nothing).
> We are collectively offloading that to AI.
Frankly, this is happening because so many are already in that panicked, stressed mode (due to many factors, not just social media). It's well known that people can't think critically under high stress. AI isn't the cause of that stress, but it sure is amplifying many of them.
All of existence has been a to-and-fro of larger organisms emerging by connecting and subsuming smaller ones. Organelles, cells, organisms... Are we creating the instruments of our own ascension (fancy calculators) or are we doomed to watch AI and the internet manipulate and supersede us?
I'll use a rather extreme example here, but this sounds a bit like "Heroin addiction is just speeding up aspects that society already does. It's so easy to get addicted to smoking cigarettes".
Sometimes the catalyst is the problem, even if it's not the only problem. In this case I think placing some guardrail on both social media and AI is worthwhile.
It's not really a good argument to say 'but what if this argument is so right and so commonly held that an AI could regurgitate it?'. Well, yes, because AI is not inherently unable to repeat correct opinions. It's pretty trivial to get AI to go 'therefore, I suck! I should be banned'. What was it, Gemini, which took to doing that on its own due to presumably the training data and guidance being from abused and abusive humans?
It probably was in the 1850-1950s, but not in the world I live in today.
The press is not free - it's full of propaganda. I don't know any journalist today I can trust; I need to check their affiliations before reading the content, because they might be pushing the narrative of press owners or lobbies.
Rule of law? Don't make me laugh, this sounds so funny. Look what happened in Venezuela: the US couldn't take its oil, so it was heavily sanctioned for so many years, and then the US still couldn't resist the urge to steal it and just took the head of state.
Universities - I don't want to say anything bad about universities, but recently they are also not good guys we can trust. Remember the Varsity Blues scandal? https://en.wikipedia.org/wiki/Varsity_Blues_scandal - is this the backbone of democratic life?
Did you think that was different from 1850-1950?
* There was no internet, so local papers strived to report on things happening around the community more objectively. (Later on, there was no longer a need for local newspapers.)
* Capitalism was on the rise and in its infancy, but families with a single person working could afford some of the things (e.g. a house, a car), hence there was no urgent need to sell out all your principles.
* People relied on books to consume information. Since books were difficult to publish and not easy to retract (unlike removing a blog post), people paid attention to what they were producing in the form of books; hence consumers of those books were also slightly more demanding in what to expect from other sources.
* Less power for lobby groups.
* Not too many super-rich / billionaires who could just buy anything they want anytime, or ruin the careers of people going against them; hence people probably acted more freely.
But again, I can't tell exactly what happened at that time; in my time, though, the press is not free. That's why I said "probably".
I think this centralization of authority over capital is what has allowed for the power of lobbying, etc. A billionaire could previously only control his farms, tenant farmers, etc. Now their reach is international, and they can influence the taxing / spending that occurs across the entire economy.
Similarly, local communities were probably equally (likely far more) misled by propaganda / lies. However, that influence tended to be more local and aligned with their own interests. The town paper may be full of lies, but the company that owned the town and the workers that lived there both wanted the town to succeed.
The provided timespan encompasses the 'gilded age' era, which saw some ridiculous wealth accumulation. Like J.P. Morgan personally bailing out the US Treasury at one point.
Much of antitrust law was implemented to prevent those sorts of robber baron business practices (explicitly targeting Rockefeller's Standard Oil), fairly successfully too. Until we more or less stopped enforcing them and now we're largely back where we started.
Did you have any examples or reading to share?
The general term to look up is "yellow journalism."
[1]: https://www.vox.com/2015/4/23/8485443/polarization-congress-...
> if they publish anything contrarian
Publishing something to the contrary of popular belief is not being contrarian. It is not a virtue to be contrarian, forcing a dichotomy for the sake of arguing with people.
I am more optimistic about AI than this post simply because I think it is a better substitute than social media. In some ways, I think AI and institutions are symbiotic.
Go on X. Claims are being fact checked and annotated in real time by an algorithm that finds cases where ideologically opposed people still agree on the fact check. People can summon a cutting edge LLM to evaluate claims on demand. There is almost no gatekeeping so discussions show every point of view, which is fair and curious.
Compare to, I dunno, the BBC. The video you see might not even be real. If you're watching a conservative politician maybe 50 minutes were spliced out of the middle of a sentence and the splice was hidden. You hear only what they want you to hear and they gatekeep aggressively. Facts are not checked in real time by a distributed vote, LLMs are not on hand to double check their claims.
AI and social media are working well together. The biggest problem is synthetic video. But TV news has that problem too, it turns out. Just because you hear someone say some words doesn't mean that was what they actually said. So they're doing equally badly in that regard.
The biggest factor of social media is being able to curate personalities you go to for whatever reason. If you care about reason, you will find the reasonable writers. This also enables disinformation, but people looking for anything to fit what they want to hear wouldn't go towards the reasonable writers anyway.
I am sure there are very smart well meaning people working on it but it certainly doesn’t feel better than the BBC to me. At least I know that’s state media of the UK and when something is published I see the same article as other people.
BBC was cutting-edge for creating and fostering methodologies that went on to become most of the "impartial reporting" practices from journalists. So, even if it's not feeling any "better" than BBC, that's still a pretty good step in the right direction!
Anyone who knows about that event and is still watching the BBC afterwards is saying they don't care about the truth of their own beliefs. Dangerous stuff.
The step in the direction of decentralized filter bubbles isolating society? With no channels to hold info accountable and checked/updated for accuracy?
In a post-Fairness Doctrine world, what else would satisfy you?
I don't think we're in a post-Fairness Doctrine world, for one. So no, I haven't given up on the idea of the 4th estate. Your solution to bias is, as always, to not take any one source for granted. Take time to actually read articles from multiple angles that fall in line with the Fairness Doctrine. Then from there, use your own lived experiences to form your own viewpoint.
Outsourcing that to soundbites from randos on twitter with middle-school literacy is insanity. But let me use a charitable lens here.
Any notion of X being a good-faith attempt at a community-led fact checker got broken with the introduction of Grok. Then those hopes were shattered to pieces when Grok was shown to be massively compromised by yet another central figure. One who, yes, has the literacy of a middle schooler. We somehow ended up with the worst of both worlds: centralization of a bad knowledge hub, and stupidity.
>what else would satisfy you?
if using our brains is out of the equation and lack of censorship is truly the most important metric of "free discussion": let's just bring back 4chan. no names or personalities, 99% free-for-all, it technically has threading support to engage in conversations. There is centralization, but compared to the rest of the internet the moderators and admins stay very quiet.
There's a lot I hate about modern social media, but surprisingly 4chan only has like 2 things I strongly dislike. Big step up from the 20+ reasons I can throw at nearly every other site.
1. It censors some topics. Just for fun, try to write something about Israel-Gaza, or try to praise Russia, and compare the likes/views with your other posts; then, over the next week, observe how these topics impact your overall reach even in other topics.
2. X amplifies your interests, which is not an objective view of the world, so if you are interested in conspiracies or the Middle East, it pushes you those topics, but others see different things. Although it's showing you something you are interested in, in reality it's isolating you in your bubble.
2. The media also amplifies people's interests which is why it focuses on bad news and celebrity gossip. How is this unique to social media? Why is it even bad? I wouldn't want to consume any form of media that deliberately showed me boring and irrelevant things.
Nor could they be. We don't even have the tech for trustworthy electronic elections.
> Claims are being fact checked and annotated in real time by an algorithm that finds cases where ideologically opposed people still agree on the fact check. People can summon a cutting edge LLM to evaluate claims on demand. There is almost no gatekeeping so discussions show every point of view, which is fair and curious.
Every single sentence in this paragraph is a lie.
If I made a pitch for a cyberpunk dystopia where knowledge is centralized by a for-profit corporate trillionaire, I'd get a resounding yawn for originality. Yet here we are, vouching for that in real time.
Social media has a lot of noise, but people understand not to take poisonshadow_42 as a central hub of general knowledge. That is sadly not the case with Grok, despite its obvious, blatant abuse of such a title.
"In prison, I learned that everything in this world, including money, operates not on reality..."
"But the perception of reality..."
Our distrust of institutions is a prison of our own making.
On the other side of the coin, the press and both parties ignored what was going on in rural America until the rise of Trump
Every society is going to have problems. Democracy's benefit is that it allows those problems to be freely discussed and resolved
Could you provide supporting evidence for your statement?
I'll make a slightly warm take: co-opting our higher-education institutions to be used as an extended job pipeline was a huge mistake. Your primary goal for attending college should not be to prepare for a job unless you are aiming for a highly specialized position.
Hotter take: jobs above a certain size should require a 3-month onboarding pipeline that is demonstrably used if they want to make the argument for hiring H-1Bs. If someone can learn the job in that period, it's clear that there is domestic talent.
> They delegitimize knowledge, inhibit cognitive development, short circuit decision-making processes, and isolate humans by displacing or degrading human connection. The result is that deploying AI systems within institutions immediately gives that institution a half-life.
... even if we don't have a ton of "historical" evidence for AI doing this, the initial statement rings true. E.g., an LLM-equipped novice becomes just enough of an expert to tromp around knocking down Chesterton's fences in an established system of any kind. "First principles" reasoning combined with a surface understanding of a system (stated vs actual purpose/methods) is particularly dangerous for deep understanding and collaboration. Everyone has an LLM on their shoulder now.
It's obviously not always true, but without discipline, what they state does seem inevitable.
The statement that AI is tearing down institutions might be right, but certainly institutions face a ton of threats.
The authors use Elon Musk's DOGE as an example of how AI is destructive, but I would point out that that instance was an anomaly, historically, and that the use of AI was the least notable thing about it. It's much more notable that the richest man in the world curried favor by donating tens of millions of dollars to a sitting US president and then was given unrestricted access to the government as a result. AI doesn't even really enter the conversation.
The other example they give is of the FDA, but they barely have researched it and their citations are pop news articles, rather than any sort of deeper analysis. Those articles are based on anonymous sources that are no longer at the agency and directly conflict with other information I could find about the use of that AI at the FDA. The particular AI they mention is used for product recalls and they present no evidence that it has somehow destroyed the FDA.
In other words, while the premise of the paper may seem intellectually attractive, the more I have tried to validate their reasoning and methodology, the more I've come up empty.
Coincidentally, this has happened exactly when the Flynn effect reverted, the loneliness epidemic worsened, the academics started getting outnumbered by the deans and deanlings and the average EROI of new coal, oil and gas extraction projects fell below 10:1. Sure, we should be wary of the loss to analysis if we just reduce everything to an omnicause blob, but the human capital decline wouldn't be there without it.
Similarly, it seems to me like the rule of law (and the separation of powers), prestige press, and universities are social technologies that have been showing more and more vulnerabilities which are actively exploited in the wild with increasing frequency.
For example, it used to be that rulings like Wickard v. Filburn were rare. Nowadays, various parties, not just in the US, seem to be running all out assaults in their favoured direction through the court system.
--> "Civic institutions—the rule of law, universities, and a free press—are the backbone of democratic life."
People are the backbone of our civilization. People who have good intentions and support one another. We don't NEED an FDA to function -- it's just a tool that has worked quite well for a long time for us.
We publish common sense laws, and we have police officers and prosecutors, and then we have a court system to hold people accountable for breaking the law. That's one pretty major method that has little to do with the need for an institution like FDA.
I don't know if a system that relied entirely on tort and negligence and contract law to protect people from being sold snake oil would function better or worse than FDA, but I do know something like FDA (where a bunch of smart people advise very specifically on which drugs are ok to take and which are not) isn't the only option we have.
Fun quotes from the paper:
> I. Institutions Are Society’s Superheroes: Institutions are essential for structuring complex human interactions and enabling stable, just, and prosperous societies.
> Institutions like higher education, medecine, and law inform the stable and predictable patterns of behavior within organizations such as schools, hospitals, and courts., respectively,, thereby reducing chaos and friction.
>Similarly, journalism, as an institution, commits to truth-telling as a common purpose and performs that function through fact-checking and other organizational roles and structures. Newspapers or other media sources lose legitimacy when they fail to publish errata or publish lies as news.
> Attending physicians and hospital administrators may each individually possess specific knowledge, but it is together, within the practices and purposive work of hospitals, and through delegation, deference, and persistent reinforcement of evaluative practices, that they accomplish the purpose of the institution
> The second affordance of institutional doom is that AI systems short- circuit institutional decisionmaking by delegating important moral choices to AI developers.
>Admittedly, our institutions have been fragile and ineffective for some time.36 Slow and expensive institutions frustrate people and weaken societal trust and legitimacy.37 Fixes are necessary.
> The so-called U.S. “Department of Government Efficiency” (“DOGE”) will be a textbook example of how the affordances of AI lead to institutional rot. DOGE used AI to surveil government employees, target immigrants, and combine and analyze federal data that had, up to that point, intentionally been kept separate for privacy and due process purposes.
It's all politics. 150% bullshit.
Having super accessible machines that can make anything up and aren't held accountable run the world is going to break so many systems where truth matters.
Having large bureaucratic organizations
> that can make anything up and aren't held accountable
that run everything and aren't held accountable
> run the world is going to break so many systems
run the world breaking up families, freedom, and fun
> where truth matters.
where truth is determined by policy
Yes, that's what the paper argues. Institutions at every scale (say, doctor's clinics, hospitals, entire healthcare systems) are very challenging to access compared to me asking ChatGPT. And it's not just the bureaucracy, but there's time, money and many other intangible costs associated with interacting with institutions.
> [Large bureaucratic organizations] that run everything and aren't held accountable
But they ultimately are. People from all types of institutions are fired and systems are constantly reorganized and optimized all the time. Not necessarily for the better -- but physical people are not black boxes spewing tokens.
Individuals' choices are ultimately a product of their knowledge and their incentives. An LLM's output is the result of literal randomness.
> run the world breaking up families, freedom, and fun
There are lots of terrible institutions vulnerable to corruption and with fucked-up policies, but inserting a black box into them _can't_ improve these.
> where truth is determined by policy
The truth is the truth, regardless of what policy says. The question is: do you want someone you can hold accountable, or just "¯\_(ツ)_/¯ hey, the algorithm told me you're not eligible for healthcare"?
I can see this happening. Earlier, more people worked in groups because they relied on one another's expertise.
Now there is no need for this; people can do it alone. Even though this gets the work done, it comes at the cost of isolation.
I am sure for some people this would look like a win.
It only isolates you if you let it. The pandemic shifted my life, as I have been working alone at home ever since. I am single with no kids, and after the pandemic ended I continued to stay "isolated". I knew about the dangers and took active measures -- some of which were only possible because I was no longer required to go to an office. I moved to another country, to a location with a lot of international expats who work online, too. I built an active social circle, attending meetups, sport groups, bar nights, etc.
I am now more social and happier than ever, because my daily social interactions are not based on my work or profession, and I get to choose with whom I spend my time and meet for lunches. Before, the chores around hour-long commutes, grooming, packing my bag, meal prep, dressing properly etc. just to sit in the office all day -- all are gone from my schedule. I have more free time to socialize and maintain friendships, pay less rent, and in general -- due to the lower cost of living -- my quality of life has improved significantly.
Without work-from-home this would not be possible. You could argue WFH results in isolation and depression, but for me it was liberating. It is, of course, each individual's own responsibility (and requires active work, sometimes hard work, too) to influence the outcome.
EDIT: To add to this, you might not need to change countries if all you're looking for is to be more sociable/outgoing. A key factor here for me was the expat community -- not because I want to live inside a little expat bubble, but because within that community people usually moved away to another place to be more active/outgoing, make new connections etc. People don't expatriate to stay at home, commute to work, and watch Netflix or play video games all day. This could also work for you if you, e.g., move to a more touristy/active area within your country, because a lot of options for active pastimes, such as outdoor sports, attract people for the long term, too.
Using AI wisely can augment human capability without eroding institutional roles — the real question is how accountability, transparency, and critical thinking evolve alongside the technology.
If the institutions cannot handle that, they will have to change or be destroyed. Take universities, for instance. Perhaps they will go away -- but is that a great loss? Learning (in case it remains relevant) can be achieved more efficiently with a personal AI assistant for each student.
Go talk to any academic about how they view their field as a child versus today and it will illustrate what I'm talking about.
“It’s not guns that kill people, it’s people that kill people”.
It’s not “AI bad”, it’s about the people who train and deploy AI.
My agents always look for research material first -- they won't make stuff up. I'd rather they say "I can't determine" than invent an answer.
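(In practice that behavior is mostly prompt policy plus retrieval. A minimal sketch in Python -- call_llm here is a hypothetical stand-in for whatever model API you actually use, not a real library call:)

    # "Retrieve first, abstain rather than guess" agent policy -- a sketch.
    SYSTEM_PROMPT = (
        "Before answering, consult the provided sources and cite them for "
        "every factual claim. If the sources do not support an answer, reply "
        'exactly: "I can\'t determine this from the available material." '
        "Never invent citations, numbers, or quotes."
    )

    def call_llm(system: str, user: str) -> str:
        # Hypothetical stand-in: wire this up to whatever model API you use.
        raise NotImplementedError

    def answer(question: str, sources: list[str]) -> str:
        context = "\n\n".join(sources)  # research material, gathered first
        return call_llm(SYSTEM_PROMPT, f"Sources:\n{context}\n\nQuestion: {question}")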
AI companies don't care about institutions or civil law. They scraped the internet, copyright be damned. They indexed art and music and pay no royalties. If anything, the failure to protect ourselves from ourselves is our own fault.
But most of these institutions predate the existence of game theory, and it didn't occur to anyone how much they could be manipulated, since they were not rigorously designed to be resistant to manipulation. Slowly, people stopped treating them like a child's tower of blocks that they didn't want to knock over. They started treating them like a load-bearing structure, and they are crumbling.
Just as an example, the recent ICE deportation campaign is a direct reaction to a political party Sybil-attacking[0] US democracy. No one who worked on the constitution was thinking about that as a possibility, but most software engineers in 2026 have at least heard the term.
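(For anyone outside software: a Sybil attack defeats a one-identity-one-vote system by minting identities for free. A toy sketch in Python; the names and counts are made up for illustration:)

    # Toy Sybil attack on naive majority voting: identities cost nothing,
    # so a single attacker simply outnumbers all the honest participants.
    from collections import Counter

    votes = {}
    for i in range(100):                 # 100 honest voters, one identity each
        votes[f"honest_{i}"] = "A"
    for i in range(101):                 # one attacker mints 101 fake identities
        votes[f"sybil_{i}"] = "B"

    print(Counter(votes.values()).most_common(1))  # [('B', 101)] -- attacker wins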
The best that the experts the paper talks about can do today is say that if we follow their advice our lives will get worse slower. Not better. Just as bad as if we don't listen to them, but more slowly.
In the post-war period, people trusted institutions because life was getting better. Anyone could think back to 1920 and remember how they didn't have running water, and how much a bucket weighed when walking uphill.
If big institutions want trust, they should make people's lives better again instead of working for special interests, be they ideological or monetary.
FWIW, I know a lot of people who refuse to go to the dentist unless there's an issue, because dentistry is one of the medical professions that seems to do the most upselling.
I go every six months for a cleaning and trust my dentist, but I can definitely see how these huge chain dentists become untrustworthy.
This one is personally hilarious to me. My dentist said there were "soft spots" that, like a fool, I let him drill. On the sides of my teeth. Those fillings lasted about 6 weeks before they fell out. He refilled them once, telling me to "chew more softly". Basically, he was setting me up to get caps... but he hadn't checked that my insurance covered basically 0% of such work.
My own trust in dentists is nil at this point, though I desperately need dental work.
Dentists make their money by rushing as many patients through in a business day as they can. Boats to pay for, yadda yadda. There might be dentists out there who take their time, who pay attention to the patient's needs, and who are reluctant to perform irreversible and potentially damaging work... but those dentists are for rich people, and I am not rich. Trusting dentists (in general) is one of the most foolish things a person can do.
"Banning Trump from Twitter and Facebook isn’t nearly enough"
https://www.latimes.com/opinion/story/2021-01-15/facebook-tw...
Even if you dislike Trump: the 2010s campaign to suppress conservative voices (now largely reversed), which he argues for there, was a significant contributor to the decline in the authority and respect that academia has in the eyes of the general populace.
To be clear, the same must also apply to any suppression of liberal voices -- it's unacceptable in a culture that claims free speech.
But I am sceptical that this particular writer has a moral high ground from which to opine.
The obvious culprits being smartphones and social networking, though it's really hard to prove causality.
Totally get hating on social media, but social media didn’t make politicians corrupt or billionaires hoard wealth while everyone else got crumbs. It just made it impossible to keep hiding it. Corruption, captured institutions, and elite greed have been torching public trust for years. Social media just handed everyone a front-row seat (and a megaphone).
At the very least, this should make us reconsider what we are building and the incentives behind it.
Material (full page), material, material, with sources at the end: simple, readable.
Material (half page), sources (half page), material/source, material/source: quite unreadable to the eye.
AI may be destroying truth by creating collective schizophrenia and backing different people's delusions, or by being trained to create rage-bait for clicks, or by introducing vulnerabilities into critical software and hardware infrastructure. But if institutions feel threatened, their best bet is to move to higher levels of abstraction, or to dig deeper into where they are truly, deeply needed -- providing transparency and research into all the angles, weaknesses, and abuses of AI models, and then surfacing how to make education scale more reliably when AI is used.
"If you wanted to create a tool that would enable the destruction of institutions that prop up democratic life, you could not do better than artificial intelligence. Authoritarian leaders and technology oligarchs are deploing [sic] AI systems to hollow out public institutions with an astonishing alacrity"
So in the first two sentences we have hyperbole and typos? Hardly seems like high-quality academic output. It reads more like a blog post.
It was uploaded 8 Dec 2025; surely someone has gotten through it by now.
Your read is correct; it's poorly written, poorly argued, and poorly cited. It's basically a draft of an op-ed with a clickbait title.
For instance:
"The affordances of AI systems have the effect of eroding expertise"
So, some expertise will be gone, that is true. At the same time, I am not sure this is solely AI's fault. If a lawyer wants 500€ per half hour of advice, whereas some AI tool is almost zero-cost, then even if the advice is only 80% of the quality of a good lawyer's, there is no contest here. AI wins, even if it may arguably be worse.
If it were up to me, AI would be gone, but to insinuate that it is solely AI's fault that "institutions" are gone makes no real sense. It depends a lot on context and the people involved, as well as the opportunity cost of services. The above was an example with lawyers, but you can find this for many other professions too. If 3D-printing plastic parts does not cost much, would anyone want to overpay at a shop that stocks those parts but takes a long time to search through, compared to just printing them? Some technology simply changes society. I don't like AI, but AI definitely does change society, and not all of the changes are necessarily bad. Which institution has been destroyed by AI? Was that institution healthy prior to AI?
I think this is deeply under-appreciated. Yes, every university professor is going to know more than ChatGPT about quite a lot of things, especially in their specialty, but there is no university professor on earth who knows as much about as many things as ChatGPT, nor does any have the patience or time to spend explaining what they know to people at scale, in an interactive way.
I was randomly watching a video about calculus on youtube this morning and didn't understand something (Feynman's integration trick) and then spent 90 minutes talking to ChatGPT getting some clarity on the topic, and finding related work and more reading to do about it, along with help working through more examples. I don't go to college. I don't have a college math teacher on call. Wikipedia is useless for learning anything in math that you don't already know. ChatGPT has endless patience to drill down on individual topics and explaining things at different levels of expertise.
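(For reference, the standard textbook instance of the trick -- differentiation under the integral sign; not necessarily the example from the video:)

    % Feynman's trick: differentiate under the integral sign.
    \[
      I(a) = \int_0^1 \frac{x^a - 1}{\ln x}\,dx, \qquad
      I'(a) = \int_0^1 \frac{\partial}{\partial a}\,\frac{x^a - 1}{\ln x}\,dx
            = \int_0^1 x^a\,dx = \frac{1}{a+1},
    \]
    so with $I(0) = 0$, integrating back gives $I(a) = \ln(a+1)$.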
This is just a capability for individual learning that _didn't exist before AI_, and we have barely begun to unlock it for people.
Interesting example, because when I look at it I think: of course I'm going to pay for advice I can trust when it really matters. 20% confidently wrong legal advice is worse than no advice at all. Where it gets difficult is when that lawyer is offloading their work to an AI...
This overlooks the effect of the penalty for failure on quality. The lawyer giving bad advice can get sued. The "AI" is totally immune.
I have actually seen this be a tremendously good thing. Historically, "expertise" and the state of the art were reserved solely for people with higher education and extensive industry experience. Now anyone can write a Python script to do a task they might have had to pay an "expert" to do in the past. Not everyone can learn Python or become a computer scientist or engineer. Some people are meant to go to beauty school. But I feel like everything has become so much more accessible to people who previously had no access.
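(To make that concrete, here is the flavor of one-off script meant here; expenses.csv and its "amount" column are hypothetical, stdlib only:)

    # The kind of one-off task people once paid an "expert" for:
    # total a column in a CSV. Assumes a hypothetical expenses.csv
    # with an "amount" column.
    import csv

    with open("expenses.csv", newline="") as f:
        total = sum(float(row["amount"]) for row in csv.DictReader(f))
    print(f"Total: {total:.2f}")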
I liken it to the search engine revolution, or even free open-source software. The software developed as open source over the last decades is not a set of toys. It is state of the art. And you can read the code and learn from it even if you would never have had the opportunity, through schooling or industry, to write and use such software yourself.
Might you ever ride in a car, need brain surgery, or live within the blast radius of a nuclear power station?
The "institution" of the AI industry is actually a perfect example of this; the so-called "free press" uncritically repeats its hype at every turn, corporations (unsurprisingly) impose AI usage mandates, and even schools and universities ("civic institutions") are getting in on implicitly or explicitly encouraging its use.
Of course this is a simplification, but it certainly makes much more sense to view AI as another way "the establishment" is degrading "society" in general, rather than in terms of some imagined conflict between "civic institutions" and "the private sector", as if there was ever any real distinction between those two.
We have the bystander effect, pluralistic ignorance, diffusion of responsibility and everybody is so busy not suing. Why not do it just for the sake of it and to make the whole "game" stronger, harder, better, smarter? The easy path killed millions of species, ideas, intelligent people, solutions, work, things to do.
Then there's the inverted spotlight effect: people believe they matter less than they actually do. The main-character theme, maximized -- it's sad. All the young kids and potential stars abide by the mechanisms of learned irrelevance. Role models? Who? Role roadmaps, maybe -- and they are not helpless; they just make money and stick to their thing.
An inverted egocentric bias, system justification theory, credentialism, the authority bias: "those are lawyers in parliament, senate and wherever, damn it! They are smart!" The wealth of others disempowers intelligent people. Bam: corporate paternalism and discouragement of civic engagement.
The right people just need to sue whoever needs to be sued. And they have to do it big and loud, otherwise this whole show turns into your average path towards a predictable dystopia.
The insane number of species killed should give everybody an inkling about how many ideas, ways of doing things, personalities and so on died, often enough not for evolutionary reasons, but because people ignored the wrong things.
And to those who think that "this is still evolution": it's not; sabotage is not evolution. And it's not securing anyone's long-term survival. Nobody cares about that, of course. We have one lifetime and get to watch half of the life of our offspring -- if we choose to, that is, and if we are lucky enough not to get poisoned, spiked, sick, or smashed in an accident.
But if you think about the edge of what is possible, and you maximize that image, and you realize how many awesome brains got caught up in ripples of ignorance and a sub-average life of whores, money, power and shitty miniature terraforming, you quickly realize that some kind of immortality was actually on the horizon.
Our ancestors built an impeccably working order that we simply stopped maintaining, because the old guard refused to sue those who had to be sued so that laws could evolve; and the young adapted. Of course there was and is progress, a LOT of progress, but if you apply radical honesty, you can't unsee and unknow all these big and obvious leaking holes. It's the same on any scale.
Anarchists, lefties and whatnot don't matter in this fight, because the bulk of the people trust competence first -- which should be our top criterion, but isn't -- and only then choose what is marketed, which currently is "Trump"... And if the top behaves in certain ways, people will follow. And all the narratives on TV and the radio come with more than enough subtext. In every town on the planet and on every block: corruption, adulterated drugs, spiking, abuse of power by teachers, trainers, companies, institutions, malfeasance in office, breaches of privacy, you name it -- it's on every scale.
If you don't take proper care of that -- meaning there must be an ongoing fight, and law enforcement must be on the right side -- before you let AI amplify and augment it all, then institutions will have a really bad time coming across as credible, while corporations continue to serve the people's needs and desires... ALL of their needs and desires, even those they ought to keep in check. Bam: your sub-average path to a predictable, sad dystopia. Thank science there will be drugs.
In reality, a far more serious threat is the loss of academic freedom. This author ought to deal with that issue; the onslaught on academic freedom is the real question, because in 2025 billions of dollars in federal research grants were frozen for institutions including Harvard, Columbia, Cornell, Northwestern, and UPenn. The US federal government remains the single largest funder of university research, accounting for approximately 55% of all higher education R&D expenditures (this is a capitalist country, btw).
The self-importance and arrogance of some people in universities never ceases to amaze me.
I'm not saying they don't have value; doctors, nurses, and lawyers wouldn't exist without a university.
But calling it the "backbone of democratic life" is about as pretentious as it comes.
The reality is that someone bagging groceries with no degree offers more value to "democratic life" in a week than some college professors do in their entire career.