frontpage.

VS Code extension for Claude Code is now generally available

https://twitter.com/claudeai/status/2013704053226717347
1•tosh•1m ago•0 comments

Trust AI, but Verify

https://jordivillar.com/blog/trust-but-verify
1•eatonphil•2m ago•0 comments

Show HN: Belgi – deterministic acceptance pipeline for LLM outputs

https://github.com/belgi-protocol/belgi-playground
2•sovsparrow•3m ago•1 comment

Digital mental health apps vs. traditional therapy

https://medium.com/@6thMind/digital-mental-health-apps-vs-traditional-therapy-when-the-app-delive...
2•smanuel•3m ago•0 comments

Show HN: MacShip – Licensing and distribution SDK/Boilerplate for non-MAS apps

2•macship•3m ago•0 comments

Ping2Pay: Making Crypto Payments as Easy as a WhatsApp Message

https://vansham-kamboj.github.io/Ping2Pay/
1•yashsm01•3m ago•1 comment

SmartOS

https://docs.smartos.org/
1•ofrzeta•3m ago•0 comments

A-12 Full Pressure Suit

https://www.cia.gov/legacy/museum/artifact/a-12-full-pressure-suit/
1•keepamovin•4m ago•0 comments

Show HN: Stop screenshotting competitor emails. AI does the analysis

https://newsletrix.com/
1•arzzen•4m ago•0 comments

Show HN: Cooking Compass, decide what to cook with what you have

https://cookingcompass.netlify.app/
1•wesselthart•4m ago•0 comments

The new Honda logo is a delightful throwback

https://www.creativebloq.com/design/logos-icons/the-new-honda-logo-is-a-delightful-throwback
1•vinhnx•4m ago•0 comments

Ark Invest's Big Ideas 2026

https://www.ark-invest.com/big-ideas-2026
1•salkahfi•6m ago•1 comment

Skip Is Now Free and Open Source

https://skip.dev/blog/skip-is-free/
2•dayanruben•6m ago•0 comments

Applying 12-Factor Principles to Coding Agent SDKs

https://www.youtube.com/watch?v=qgAny0sEdIk
1•musha68k•8m ago•0 comments

GPQA and HLE Are Broken

https://zenodo.org/records/18293568
1•whwhyb•9m ago•1 comment

Show HN: Burnt out and failing, I built an AI that gives a shit

1•kaufy•9m ago•1 comment

Benchmark Comparison: JSONL vs. TOON output for JSON-render efficiency

https://github.com/vercel-labs/json-render/issues/33
1•lafalce•10m ago•0 comments

Pull requests with LLM attribution are predatory behavior

https://127001.me/post/llm-attribution-predatory/
2•koiueo•10m ago•0 comments

Show HN: I built an AI book recommender in 2 days

https://mynextbook.ai
1•PouyaRZ•10m ago•0 comments

Calico Basin Scrambling

https://xorvoid.com/2026_01_calico_basin_scrambling.html
1•ibobev•10m ago•0 comments

Time in C++: C++20 Brought Us Time Zones

https://www.sandordargo.com/blog/2026/01/21/clocks-part-8-cpp20-timezones
2•ibobev•11m ago•0 comments

FoundationDB's versionstamps should be everywhere

https://fragno.dev/blog/versionstamps
2•WilcoKruijer•12m ago•0 comments

Show HN: yolo-cage – AI coding agents that can't exfiltrate secrets

https://github.com/borenstein/yolo-cage
3•borenstein•13m ago•0 comments

Everything Gen Z needs to know about the 2025 tech landscape

https://stackoverflow.blog/2026/01/14/gen-z-wrapped-2025/
1•BerislavLopac•14m ago•0 comments

Show HN: I made a roguelike game playable over SSH

https://dev-dungeon.com
3•viiralvx•14m ago•0 comments

Scott Bessent calls Denmark "irrelevant", is not concerned by Treasury sell-off

https://www.cnbc.com/2026/01/21/bessent-davos-denmark-greenland-treasuries.html
2•maxloh•15m ago•1 comment

100x a Business with AI

https://twitter.com/vasuman/status/2010473638110363839
1•gmays•15m ago•0 comments

libcurl memory use some years later

https://daniel.haxx.se/blog/2026/01/21/libcurl-memory-use-some-years-later/
3•TangerineDream•17m ago•0 comments

The Oligarchs Pushing for Conquest in Greenland

https://newrepublic.com/article/205102/oligarchs-pushing-conquest-greenland-trump
3•afavour•18m ago•0 comments

The Confabulations of Oliver Sacks

https://nautil.us/the-confabulations-of-oliver-sacks-1262447/
2•bookofjoe•19m ago•1 comment

How AI destroys institutions

https://cyberlaw.stanford.edu/publications/how-ai-destroys-institutions/
228•JeanKage•1h ago

Comments

chrisjj•1h ago
True title: How AI Destroys Institutions
PaulHoule•1h ago
"How" automatically gets chopped off as a prefix for HN submissions although you can usually edit and put it back.
chrisjj•1h ago
Indeed. Crazy, esp. given the guidelines say "please use the original title".
PaulHoule•1h ago
I would say "Civic institutions function in ways that degrade and are likely to destroy ... civic institutions"
inanutshellus•1h ago
Cute, but that's an ineffective belittlement of his argument, not only because it's irrelevant but because he covers that even in the abstract:

> Purpose-driven institutions [...] empower individuals to take intellectual risks and challenge the status quo.

(which of course includes and is most-often the institution itself.)

PaulHoule•1h ago
I'll argue otherwise.

Almost the defining problem of modern institutions is sweeping problems under the rug, be it climate change or (most importantly) Habermas's "Legitimation Crisis" [1]. It's something I've been watching happen at my Uni ever since I've had anything to do with it. The spectacle of institutions failing to defend themselves [2] turns people against them.

Insofar as any external threat topples an institution or even threatens it seriously, there was a failure of homeostasis and boundaries from the very beginning.

[1] https://en.wikipedia.org/wiki/Legitimation_Crisis_(book)

[2] ... the king is still on the throne, the pound is still worth a pound ...

21asdffdsa12•1h ago
No, it does not. It preserves them as they were. The spread of cultures that cannot build institutions and uphold the rule of law does, something to which Stanford has contributed considerably. The faction that cannot build a working economic system, yet yearns to rule the economic system, is also incapable of building working societies, institutions, and the rule of law.
SecretDreams•1h ago
Did you read the article?
hoppyhoppy2•1h ago
>Please don't comment on whether someone read an article. "Did you even read the article? It mentions that" can be shortened to "The article mentions that".

https://news.ycombinator.com/newsguidelines.html

SecretDreams•57m ago
Noted, thanks.
alwayseasy•1h ago
It feels like you didn't read or even skim the full article and instead are just reacting to the title.
squeefers•54m ago
Feels like you think social media is bad for other people but not for you. Every single one of you is posting on social media right now, while making the case that it's evil or a problem or bad or some negative descriptor. People who think it's only bad for kids are quick to bring up porn, but that issue is itself an emotional reaction. Remember when, prior to the 1950s, they said homosexuality was bad for mental health; then once it became socially acceptable, there was suddenly "evidence" to the contrary.
uptownJimmy•1h ago
All scams are inherently destructive. Everything else is co-morbidity.
alwayseasy•1h ago
Note this is the abstract, so please let's not debate the abstract...

The link to download the paper is here: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5870623

rpdillon•1h ago
I already debated this on HN when this was posted two days ago, but this paper is not peer-reviewed and is a draft. The examples it uses of DOGE and of the FDA using AI are not well researched or cited.

Just as an example, they criticize the FDA for using an AI that can hallucinate whole studies, but they don't mention that it's used for product recalls, and the source they cite for this criticism is an Engadget article covering a CNN article that got the facts wrong, since it relied on anonymous sources: disgruntled employees who had since left the agency.

Basically what I'm saying is the more you dig into this paper, the more you realize it's an opinion piece.

bayindirh•1h ago
This is what drafts are for. It's either a very rough draft with some errors and room for improvement, or a very bad draft sitting on the wrong foundation.

Either way, it's an effort, and at least the authors will learn what not to do.

yunohn•1h ago
No, it’s definitely not what drafts are for. Fundamental issues of the nature pointed out by the parent comment are way too serious to make it into a draft. Drafts are for minor fixes and changes, as per the usual meaning of the word draft.
tucnak•47m ago
Not only is it an opinion piece disguised as a scientific "Article" with a veneer of law, it has all the hallmarks of quackery: flowery language full of allegory and poetic comparisons, and hundreds of superficial references from every area imaginable sprinkled throughout, including but not limited to Medium blog posts, news outlets, IBM one-page explainers, random sociology literature from the 40's, 60's and 80's, etc.

It reads like a trademark attorney turned academic got himself interested in "data" and "privacy," wrote a book about it in 2018, and proceeded to be informed on the subject of AI almost exclusively by journalists from popular media outlets like Wired/Engadget/Atlantic, bringing it all together by shoddily referencing his peers at Harvard and curious-sounding 80's sociology. But who cares, as long as AI bad, am I right?

46493168•18m ago
Are there any particular points you want to refute?
randusername•44m ago
> Institutions like higher education, medecine, and law inform the stable and predictable patterns of behavior within organizations such as schools, hospitals, and courts., respectively,, thereby reducing chaos and friction.

Hard to take seriously with so many misspellings and duplicate punctuation.

I vibe with the general "AI is bad for society" tone, but this argument feels a lot to me like "piracy is bad for the film industry" in that there is no recognition of why it has an understandable appeal with the masses, not just cartoon villains.

Institutions bear some responsibility for what makes AI so attractive. Institutional trust is low in the US right now; journalism, medicine, education, and government have not been living up to their ideals. I can't fault anyone for asking AI medical questions when it is so complex and expensive to find good, personalized healthcare, or for learning new things from AI when access to an education taught by experts is so costly and selective.

wendgeabos•1h ago
The title makes me disinclined to read the whole paper. But I guess that's AI destroying institutions. Sigh. Social science != science.
wendgeabos•1h ago
I guess I'll get voted down for this -- oh well.
wavefunction•1h ago
Complaining about being downvoted is discouraged here. Tomorrow is another day.
PaulHoule•1h ago
Funny I find that "opinions like this always get downvoted" can suppress downvotes! On the other hand if you are overflowing with karma what is an occasional -11?
squeefers•52m ago
HN sells the comments to data brokers and gets less money if they contain toxicity, because they have to manually filter it out. That's why they're hot on moderation now. Where can I go to express non-happy thoughts these days?
PaulHoule•29m ago
Bluesky, X, ...

Toxic is toxic.

reedf1•1h ago
In other words: the penny finally dropped for lawyers. The next decade will be every knowledge field actively conspiring against AI.
FromTheFirstIn•1h ago
Ah yes, it’s not an earnest critique that the tech is destabilizing and isolating. It’s a conspiracy! Thank you. For a moment there I thought I’d have to examine my own beliefs!
philipallstar•1h ago
Sounds like you probably highly resist examining them.
FromTheFirstIn•58m ago
Dang maybe you’re right
reedf1•47m ago
This type of reflexive snark is just shite; I'm so bored of it. Things can be both earnest and compelled - right? I agree with you, and still hold my opinion.
bogzz•1h ago
Awesome.
Applejinx•57m ago
Where appropriate. For instance, the purpose of lawyers is to serve and propagate the law, as distinct from 'most people say'. Justice in general is meant, imperfectly, to strive for correct answers on the highest possible level, even and especially if new accepted case law serves to contradict what was put up with before.

So, web programmers could be going against AI on the grounds of self-preservation and be wholly justified in doing so, but lawyers are entitled to go after AI over more fundamental, irreconcilable differences. AI becomes a passive 'l'état, c'est moi' thing, locking in whatever it's arrived at as a local maximum and refusing to introspect. This is antithetical to law.

DrScientist•23m ago
> For instance, the purpose of lawyers is to serve and propagate the law

But day to day, they spend a lot of their time selling boilerplate contracts and wills, or trying to smuggle loopholes into verbose contracts, or trying to find said holes in contracts presented by a third party[1]

Or if they are involved in criminal law, I suspect they spend most of their time sifting the evidence and looking for the best way to present it for their client, and in the age of digital discovery the volume of evidence is overwhelming.

And in terms of delivering justice in a criminal case, isn't that the role of the jury (if you are lucky enough to still have one)?

I suspect very few lawyers ever get involved in cases that lead to new precedents.

code51•12m ago
12 Angry Agents
diablozzq•1h ago
The immediate hot take: even if correct, the paper is largely an opinion piece wrapped in an academic paper with a Stanford logo.

Most of the sections had no citations, only anecdotes.

Like many who have predicted doom or claimed ChatGPT can never do XYZ, anecdotes do not build a substantive argument.

mahoho•18m ago
...What? Like half of the vertical length of the paper is footnotes with citations. Also, opinions are kind of an important part of how we get ideas.
rafaelbeirigo•1h ago
This is a byproduct of every revolution.
Forgeties79•1h ago
Most revolutions have historically resulted in a lot of death and little change for the better if any. Frequently the outcome is worse than before.
nisegami•1h ago
Forest fires are immensely destructive, but they clear the way for new growth in their wake. The same has been said for recessions and the economy, and I think there's at least some comparison to be made for revolutions and societies.
oblio•1h ago
Awesome, please post this from inside a forest fire and tell us your feelings then.
thatguy0900•1h ago
Nah, this revolution the billionaires who control the AI and automated means of production will voluntarily give their money to the little guy, instead of needing widespread unrest and riots beforehand like the other times.
toddmorey•1h ago
Right you are. Nature can be violent, but prefers gradual change. Abrupt change shocks ecosystems and always comes with unintended consequences.
bethekidyouwant•1h ago
What is your short list of changes that have resulted in life and betterment?

Are we only talking about technological revolutions here or are you talking about peasants uprising in China 1000 years ago?

PlatoIsADisease•1h ago
The Copernican Revolution (the discovery that Earth was not at the center of the solar system) initially produced worse empirical calculations, because they didn't know planets travel in ellipses.

The moments after a revolution might be worse, but in the long term, we got better.

Forgeties79•56m ago
We can pick the best and worst examples all day, but it’s not very productive IMO.
naasking•51m ago
It could be, if it actually lets us calibrate our credence in your original claim that most revolutions have resulted in a lot of death for little benefit. If the worst examples are much worse than the best examples, or vice versa, then we can plausibly conclude whether you are at least directionally correct.
toddmorey•1h ago
I struggle with the thesis that our institutions haven't already been fatally wounded. Social media and endless content for passive consumption have already eroded the free press, short-circuited decision-making, and isolated people from each other.
squeefers•1h ago
> Social media and endless content for passive consumption

neither being able to speak to someone on a computer nor videos on the internet are new, fancy web 10.0 frontend notwithstanding

> and isolated people from each other.

I assume you mean doomscrolling, as opposed to the communication social media affords, because social media actually connects us (unless apparently it's Facebook, in which case messaging is actually bad)

harimau777•56m ago
I'm not sure what you mean. The internet itself is new let alone widespread access to video sharing.

Part of the problem is that social media isn't social media anymore. It's an algorithmic feed that only occasionally shows content from people you're friends with. If Facebook went back to its early days, when it was actually a communication tool, then I don't think you would see the same complaints about it.

wat10000•32m ago
Most social media isn't about communication, it's about engagement bait. Most usage consists of popular accounts sending messages, then people writing replies that are never read by the original account, and some vapid argument or agreement among the replies. It essentially pretends to connect us while actually capturing our attention away from that connection.
littlemerman•1h ago
Title should be updated to “How AI Destroys Institutions” to match source
world2vec•48m ago
HN auto-removes "How" from the titles.
buellerbueller•1h ago
dang, this submission title should be amended.

JFC downvoters; when I posted this comment the title did not match the article title.

6DM•1h ago
I don't think AI is the cause, it's merely the mechanism that is speeding up what has already been happening.

Social media was already isolating people. It is being sped up by the use of AI bots (see dead internet theory). These bots are being used to create chaos in society for political purposes, but overall it's increasingly radicalizing people and as a result further isolating everyone.

AI isn't eroding college institutions, they were already becoming a money grab and a glorified jobs program. Interpersonal relationships (i.e. connections) are still present, I don't see how AI changes that in this scenario.

I am not a fan of how AI is shaping our society, but I don't place blame on it for these instances. In my opinion, AI is merely speeding these trends up.

The article does highlight one thing that I do attribute to AI, and that is the lack of critical thinking. People are thinking less with the use of AI. Instead of spending time evaluating, exploring, and trying to think creatively, we are collectively offloading that to AI.

echelon•57m ago
100% correct in the first part, though I'd like to think there's a bimodal effect with AI users and usage.

Hard working expert users, leveraging AI as an exoskeleton and who carefully review the outputs, are getting way more done and are stronger humans. This is true with code, writing, and media.

People using AI as an easy button are becoming weaker. They're becoming less involved, less attentive, weaker critical thinkers.

I have to think that over some time span this is going to matter immensely. Expert AI users are going to displace non-AI users, and poor AI users are going to be filtered at the bottom. So long as these systems require humans, anyway.

Personally speaking:

My output in code has easily doubled. I carefully review everything and still write most stuff by hand. I'm a serious engineer who has built and maintained billion-dollar transaction-volume systems. Distributed systems, active-active, five-plus nines SLA. I'm finding these tools immensely valuable.

My output in design is 100% net new. I wasn't able to do this before. Now I can spin up websites and marketing graphics. That's insane.

I made films and media the old fashioned way as a hobby. Now I'm making lots of it and constantly. It's 30x'd my output.

I'm also making 3D characters and rigging them for previz and as stand-ins. I could never do that before either.

I'm still not using LLMs to help my writing, but eventually I might. I do use it as a thesaurus occasionally or to look up better idioms on rare occasion.

nathan_compton•48m ago
I have observed this with students. Some use AI to really extend their capabilities and learn more, others become lazy and end up learning less than if they hadn't used AI.
nautilus12•54m ago
I don't think this argument makes much sense. If you are running downhill towards a cliff, then saying that adding a cart to speed up the process doesn't give the cart moral blameworthiness is an unhelpful observation. You can still choose to stop running down the hill, or not to get on the cart.
sodapopcan•48m ago
Exactly! I was going to make a similar comment if I hadn't already seen one. People keep saying things like this and it drives me fuckin' nuts. It's not that there are no positives, but I don't see how the positives outweigh the negatives.
greenavocado•52m ago
Dead internet theory original post: https://forum.agoraroad.com/index.php?threads/dead-internet-...
Angostura•44m ago
I rather disagree with this position.

To risk an analogy: if I throw petrol onto an already smouldering pile of leaves, I may not have 'caused' the forest fire, but I have accelerated it so rapidly that the situation becomes unrecognisable.

There may already have been cracks in the edifice, but they were fixable. AI takes a wrecking ball to the whole structure

basilgohar•38m ago
I agree and disagree with parts of what you said.

AI may have caused a distinct trajectory for the problem, but the old system was already broken and collapsing. Whether the building falls over or collapses in place doesn't change the fact that the building was already at its end.

I think the fact that AI is allowed to go as far as it has is part of the same issue, namely our profit-at-all-costs methodology of late-stage capitalism. This has led to the accelerated destruction of many institutions. AI is just one of those tools that lets us sink more and more resources into the grifting, faster.

(Edit: Fixing typos.)

booleandilemma•31m ago
I agree with this. We've made existing problems 100x worse overnight. I just read the curl project is discontinuing bug bounties. We're losing so much with the rise of AI.
gosub100•19m ago
Or: having a glass of wine with dinner or a few beers on the weekend is fine, but drinking a six-pack per day or slamming shots every night is reckless and will lead to health consequences.
jrjeksjd8d•41m ago
Capitalism is destroying institutions. Any new technology must be employed in service of "number go up". In this system externalities have to be priced in with taxes, but it's cheaper to buy off legislators than to actually consider the externalities.

This is how we get food that has fewer nutrients but ships better, free next-day delivery of plastic trash from across the world that doesn't work, schools that exist to extract money rather than teach, social media that exists primarily to shove ads in your face and trick you into spending more time on it.

In the next 4 years we will see the end of the American experiment, as shareholder capitalism completely consumes itself and produces an economy that can only extort and exploit but not make anything of value.

jongjong•9m ago
It's not capitalism, it's the monetary system that's the problem. It's not a level playing field. Capitalism requires a fair monetary system as a precondition. Though I can agree that communism would be better than whatever perverse system we have now.
breppp•23m ago
I agree, actually AI might reverse some of these processes.

Universities have pushed post-modernism since the 60s which is the precursor for the deprecation of truth.

Later on, when Google killed the press, truth became subservient to easy money grabs, ending with a social media landscape completely swamped with disinformation.

However, an AI model with decent alignment (maybe even government-regulated alignment) might lead to decent narratives whose goal is not what we're seeing now with left and right populism, which is the destruction of the state.

raw_anon_1111•18m ago
A government-regulated alignment may lead to increased truth?? Have you been paying attention over the last year, as the government cleanses government websites of any facts that don't support its narrative?
hall0ween•16m ago
> Universities have pushed post-modernism since the 60s which is the precursor for the deprecation of truth.

Call me crazy, but the situation may be more nuanced than this (and your next statement). For example, did all universities embrace post-modernism? Also, are universities the arbiters of truth? If so, which universities and which truths? Or is it the transcendental Truth all universities gave out? Lastly, post-modernist ideas on media, or on some other part of culture?

duskdozer•7m ago
What kind of left populism are you talking about, and how has it contributed to the destruction of the state?
palmotea•15m ago
> I don't think AI is the cause, it's merely the mechanism that is speeding up what has already been happening.

I think the technical term is "throwing gas on the fire." It's usually considered a really bad thing to do.

> I am not a fan of how AI is shaping our society, but I don't place blame on it for these instances. It is in my opinion that AI is speeding up these aspects.

If someone throws gas on a fire, you can totally blame them for the fire getting out of control. After all, they made it much worse! Like: "we used to have smouldering brush fire that we could put out, but since you dumped all that gas on it, now we will die because we have a forest fire raging all around us."

jongjong•10m ago
Yes of course AI is just a symptom. The cause is the fiat monetary system. In all history, no fiat monetary system has ever lasted. There have been hundreds. They always fail eventually and lead to the collapse of nations and empires.
IvanK_net•1h ago
Can anybody prove that the essay and the abstract were not written by AI, using 20 to 50 words as a prompt?
Applejinx•52m ago
Of course not, it's expressing widely held observations that have been out there in the human population for a long time, and they're correct observations so they're hardly impossible to find.

It's not really a good argument to say 'but what if this argument is so right and so commonly held that an AI could regurgitate it?'. Well, yes, because AI is not inherently unable to repeat correct opinions. It's pretty trivial to get AI to go 'therefore, I suck! I should be banned'. What was it, Gemini, which took to doing that on its own due to presumably the training data and guidance being from abused and abusive humans?

throwaw12•1h ago
> Civic institutions - the rule of law, universities, and a free press - are the backbone of democratic life

It probably was in 1850-1950s, but not in the world I live today.

The press is not free - it's full of propaganda. I don't know any journalist today I can trust; I need to check their affiliations before reading the content, because they might be pushing the narrative of press owners or lobbies.

Rule of law? Don't make me laugh, this sounds so funny. Look what happened in Venezuela: the US couldn't take its oil, so it was heavily sanctioned for so many years, and then the US still couldn't resist the urge to steal it, and just took the head of state.

Universities - I don't want to say anything bad about universities, but recently they are also not good guys we can trust. Remember the Varsity Blues scandal? https://en.wikipedia.org/wiki/Varsity_Blues_scandal - is this the backbone of democratic life?

freejazz•52m ago
> Press is not free - full of propaganda

Did you think that was different from 1850-1950?

throwaw12•43m ago
I don't think it was, but I feel like the situation was slightly better, for a few reasons:

* there was no internet, so local communities strove to report what was happening around them more objectively. Later on, there was no need for local newspapers

* capitalism was on the rise and in its infancy, but families with a single person working could afford some things (e.g. a house, a car), hence there was no urgent need to sell out all your principles

* people relied on books to consume information; since books were difficult to publish and not easy to revert (unlike removing a blog post), people paid attention to what they produced in book form, hence consumers of those books were also slightly more demanding in what to expect from other sources

* less power for lobby groups

* not too many super-rich / billionaires who could just buy anything they wanted anytime, or ruin the careers of people going against them, hence people probably acted more freely.

But again, I can't tell exactly what happened at that time; in my time, though, the press is not free. That's why I said "probably"

ClarityJones•30m ago
I would disagree about capitalism being on the rise. Marx and his views spread after the 1850s, and communist / socialist revolutions swept through Europe. There may have been more discussion of "capitalism" and an increase in industrialization, but "capital" had existed and operated for centuries before that. What changed was who owned the capital and how it was managed; specifically, there has been a vast increase in central / government control.

I think this centralization of authority over capital is what has enabled the power of lobbying, etc. A billionaire could previously only control his farms, tenant farmers, etc. Now their reach is international, and they can influence the taxing / spending that occurs across the entire economy.

Similarly, local communities were probably equally (likely far more) misled by propaganda / lies. However, that influence tended to be more local and aligned with their own interests. The town paper may have been full of lies, but the company that owned the town and the workers who lived there both wanted the town to succeed.

biophysboy•27m ago
I think the 1876 election in the USA is an interesting case that counters this view.
marginalia_nu•9m ago
> * not too many super-rich / billionaires, who can just buy anything they want anytime, or ruin the careers of people going against them, hence people probably acted more freely.

The provided timespan encompasses the 'gilded age' era, which saw some ridiculous wealth accumulation. J.P. Morgan personally bailed out the US Treasury at one point.

zdc1•37m ago
My (non-authoritative) understanding was that after Vietnam there was a more recognised need to control what the media published, resulting in Operation Mockingbird and such. However, given how centralised the media has always been, I could see it being influenced before this.

Did you have any examples or reading to share?

YetAnotherNick•36m ago
Yes. At the very least there wasn't strong polarization, so the return on propaganda content was lower. Now a newspaper risks losing its consumers if it publishes anything contrarian.

[1]: https://www.vox.com/2015/4/23/8485443/polarization-congress-...

b40d-48b2-979e•33m ago

    if they publish anything contrarian
Publishing something contrary to popular belief is not being contrarian. It is no virtue to be contrarian, forcing a dichotomy for the sake of arguing with people.
biophysboy•40m ago
The alternative to all of these institutions is currently social media, which is worse by any metric: accuracy, fairness, curiosity, etc.

I am more optimistic about AI than this post simply because I think it is a better substitute than social media. In some ways, I think AI and institutions are symbiotic

techblueberry•37m ago
Paradoxically, these institutions are probably the best they've ever been. We trusted them more 100 years ago because we didn't know better, but we're now letting perfect be the enemy of good. Wise men once said:

"In prison, I learned that everything in this world, including money, operates not on reality..."

"But the perception of reality..."

Our distrust of institutions is a prison of our own making.

mbesto•31m ago
If this is all true (I don't disagree), then what is or should be the backbone of democratic life?
wongarsu•17m ago
They are (part of) the backbone of democratic life. But democratic life hasn't been doing well in the US in the last decades. The broken backbone is both cause and symptom of this in a vicious cycle
raw_anon_1111•12m ago
The press has never been believable. How many innocent people were beaten, framed, and shot while the press just took the word of the police? Rappers in the 80s were talking about police brutality, but no one believed them until the Rodney King video in 1992. Now many don't instinctively trust the police, because everyone has a camera in their pocket and publishes video on social media.

On the other side of the coin, the press and both parties ignored what was going on in rural America until the rise of Trump

jvanderbot•1h ago
It's hard for me to argue with these few direct sentences.

    They delegitimize knowledge, inhibit cognitive development, short circuit
    decision-making processes, and isolate humans by displacing or degrading human connection.
    The result is that deploying AI systems within institutions 
    immediately gives that institution a half-life.
... even if we don't have a ton of "historical" evidence for AI doing this, the initial statement rings true.

e.g., an LLM-equipped novice becomes just enough of an expert to tromp around knocking down Chesterton's fences in an established system of any kind. "First principles" reasoning combined with a surface understanding of a system (stated vs actual purpose/methods) is particularly dangerous for deep understanding and collaboration. Everyone has an LLM on their shoulder now.

It's obviously not always true, but without discipline, what they state does seem inevitable.

The statement that AI is tearing down institutions might be right, but certainly institutions face a ton of threats.

rpdillon•48m ago
The examples that the paper cites that are historical are not compelling, in my opinion.

The authors use Elon Musk's DOGE as an example of how AI is destructive, but I would point out that that instance was an anomaly, historically, and that the use of AI was the least notable thing about it. It's much more notable that the richest man in the world curried favor by donating tens of millions of dollars to a sitting US president and then was given unrestricted access to the government as a result. AI doesn't even really enter the conversation.

The other example they give is of the FDA, but they barely have researched it and their citations are pop news articles, rather than any sort of deeper analysis. Those articles are based on anonymous sources that are no longer at the agency and directly conflict with other information I could find about the use of that AI at the FDA. The particular AI they mention is used for product recalls and they present no evidence that it has somehow destroyed the FDA.

In other words, while the premise of the paper may seem intellectually attractive, the more I have tried to validate their reasoning and methodology, the more I've come up empty.

OgsyedIE•58m ago
The natural process of creative destruction deterritorializes everything; that's how every advance in history has always come about. The apparent difference perceived here is merely that this time, the human capital in the institutions that's supposed to reinvent the institutions alongside each new development is struggling.

Coincidentally, this has happened exactly when the Flynn effect reverted, the loneliness epidemic worsened, the academics started getting outnumbered by the deans and deanlings and the average EROI of new coal, oil and gas extraction projects fell below 10:1. Sure, we should be wary of the loss to analysis if we just reduce everything to an omnicause blob, but the human capital decline wouldn't be there without it.

blfr•57m ago
I am skeptical of hypotheses like this when the deterioration began before its supposed cause. This is how I look at social media or Tinder being blamed for loneliness or low fertility. While they may have exacerbated the issues, trends had been unfavorable for decades if not centuries before.

Similarly, it seems to me like the rule of law (and the separation of powers), prestige press, and universities are social technologies that have been showing more and more vulnerabilities which are actively exploited in the wild with increasing frequency.

For example, it used to be that rulings like Wickard v. Filburn were rare. Nowadays, various parties, not just in the US, seem to be running all out assaults in their favoured direction through the court system.

johndhi•56m ago
Sorry to be this person, but I don't really agree with the first sentence:

--> "Civic institutions—the rule of law, universities, and a free press—are the backbone of democratic life."

People are the backbone of our civilization. People who have good intentions and support one another. We don't NEED an FDA to function -- it's just a tool that has worked quite well for a long time for us.

AnimalMuppet•52m ago
The problem is that not all people have good intentions. That's why we actually need the FDA, or something like it.
johndhi•35m ago
There are a lot of tools to address the problem that some people have bad intentions.

We publish common-sense laws, we have police officers and prosecutors, and then we have a court system to hold people accountable for breaking the law. That's one pretty major method that has little to do with the need for an institution like the FDA.

I don't know if a system that relied entirely on tort, negligence, and contract law to protect people from being sold snake oil would function better or worse than the FDA, but I do know something like the FDA (where a bunch of smart people advise very specifically on which drugs are OK to take and which are not) isn't the only option we have.

rapatel0•55m ago
More social science academic nonsense.

Fun quotes from the paper:

> I. Institutions Are Society's Superheroes: Institutions are essential for structuring complex human interactions and enabling stable, just, and prosperous societies.

> Institutions like higher education, medicine, and law inform the stable and predictable patterns of behavior within organizations such as schools, hospitals, and courts, respectively, thereby reducing chaos and friction.

> Similarly, journalism, as an institution, commits to truth-telling as a common purpose and performs that function through fact-checking and other organizational roles and structures. Newspapers or other media sources lose legitimacy when they fail to publish errata or publish lies as news.

> Attending physicians and hospital administrators may each individually possess specific knowledge, but it is together, within the practices and purposive work of hospitals, and through delegation, deference, and persistent reinforcement of evaluative practices, that they accomplish the purpose of the institution

> The second affordance of institutional doom is that AI systems short-circuit institutional decisionmaking by delegating important moral choices to AI developers.

> Admittedly, our institutions have been fragile and ineffective for some time.[36] Slow and expensive institutions frustrate people and weaken societal trust and legitimacy.[37] Fixes are necessary.

> The so-called U.S. “Department of Government Efficiency” (“DOGE”) will be a textbook example of how the affordances of AI lead to institutional rot. DOGE used AI to surveil government employees, target immigrants, and combine and analyze federal data that had, up to that point, intentionally been kept separate for privacy and due process purposes.

It's all politics. 150% bullshit.

gbanfalvi•53m ago
I think the paper is really good and makes loads of valid points... and it's kinda terrifying.

Having super accessible machines that can make anything up and aren't held accountable run the world is going to break so many systems where truth matters.

pizzafeelsright•42m ago
> Having super accessible machines

Having large bureaucratic organizations

> that can make anything up and aren't held accountable

that run everything and aren't held accountable

> run the world is going to break so many systems

run the world breaking up families, freedom, and fun

> where truth matters.

where truth is determined by policy

gbanfalvi•11m ago
> Having large bureaucratic organizations

Yes, that's what the paper argues. Institutions at every scale (say, doctors' clinics, hospitals, entire healthcare systems) are very challenging to access compared to me asking ChatGPT. And it's not just bureaucracy: there's time, money, and many other intangible costs associated with interacting with institutions.

> [Large bureaucratic organizations]that run everything and aren't held accountable

But they ultimately are. People at all types of institutions are fired, and systems are constantly reorganized and optimized. Not necessarily for the better -- but physical people are not black boxes spewing tokens.

Individuals' choices are ultimately a product of their knowledge and their incentives. An LLM's output is the result of literal randomness.

> run the world breaking up families, freedom, and fun

There are lots of terrible institutions vulnerable to corruption and with fucked up policies, but inserting a black box into them _can't_ improve these.

> where truth is determined by policy

The truth is the truth, regardless of what policy says. The question is: do you want to have someone to hold accountable, or just "¯\_(ツ)_/¯ hey, the algorithm told me that you're not eligible for healthcare"?

zinodaur•51m ago
I firmly believe "AI will be used by the psycho weasel billionaires to torture us all", but this article is weak. It seems like it was written by people too scared of AI to use it. They have a couple of points you realize in your first month of using AI, and they wrap them in paragraphs of waffle. I wish the anti AI camp was more competent
onaclov2000•51m ago
I think that as communities spread (as with the increased joining of online communities), we lost much in our local communities. This feels to me like a sort of extension of that: it's a community of one (well, kinda -- it's a sum of communities, and kinda missing the bidirectional communication of a community), maybe?
bigbug123•51m ago
## Summary: "How AI Destroys Institutions"

*Core Thesis:* AI systems' fundamental design features degrade and will eventually destroy civic institutions essential to democratic life.

*Three Destructive Affordances of AI:*

1. *Undermines Expertise* - Encourages cognitive offloading, leading to skill atrophy - Creates illusion of accuracy while producing inevitable "hallucinations" - Backward-looking nature cannot adapt to changing circumstances - Displaces knowledge transfer between humans

2. *Short-Circuits Decision-Making* - Outsources moral choices to machines, obscuring accountability - Flattens institutional hierarchies needed for oversight - Removes critical points of reflection and contestation - Incapable of intellectual risk-taking or challenging status quo

3. *Isolates Humans* - Displaces opportunities for interpersonal connection - Sycophantic design erodes capacity for managing social friction - Depletes social capital and solidarity institutions require

*Institutions at Risk:*

- *Rule of Law:* AI decisions lack transparency, predictability, and accountability required for legitimate governance - *Higher Education:* Offloads learning, homogenizes output, undermines trust between students and educators - *Journalism:* AI "slop" pollutes information ecosystem; press cannot fulfill watchdog function - *Democracy:* Erodes social capital, generalized reciprocity, and civic participation

*Key Evidence Cited:* - Studies showing AI use inhibits critical thinking and problem-solving skills - DOGE's use of AI to surveil employees and bypass institutional safeguards - FDA's "Elsa" system hallucinating nonexistent studies - Research on "Model Autophagy Disease" degrading AI accuracy

*Conclusion:* Without rules mitigating AI's spread, institutional dissolution is inevitable. Authors call for bright-line prohibitions rather than half-measures like ethics principles or consent frameworks.

naasking•44m ago
> Encourages cognitive offloading, leading to skill atrophy

Speculative; we don't have that evidence yet. The robust evidence we do have is in students who have not yet developed skills and maturity, and who also have a lot of other confounding factors in their development, e.g. social media, phones, and formative development years exposed to immediate pleasure-seeking feedback loops.

> Backward-looking nature cannot adapt to changing circumstances

As opposed to the nimble flexibility that established institutions are historically known for?

> Displaces knowledge transfer between humans

If people use LLMs, presumably it's because knowledge transfer is faster and more convenient that way than via direct interaction.

> Flattens institutional hierarchies needed for oversight

We have hierarchical institutions for oversight because flatter institutions don't scale (because humans don't scale). If AI can scale and can provide the same transparency, accountability and oversight, how is that not an improvement?

I could go on with the remaining points, but suffice it to say that there are a lot of faulty assertions behind the paper's arguments. It's also interesting that every Chicken Little saying the sky is falling immediately reaches for the ban hammer instead of providing constructive criticism on how AI (or whatever innovation) can improve to mitigate these issues.

notjes•51m ago
"Civic institutions—the rule of law, universities, and a free press" All of those are rotten and corrupted to the core. But I don't think they will be destroyed by AI. People are voting with their feet.
embedding-shape•48m ago
This has had a large discussion already just five days ago, with ~160 comments: https://news.ycombinator.com/item?id=46644779
FuturisticLover•47m ago
"Isolating people from each other"

I can see this happening. Earlier, more people worked in groups because they relied on one another's expertise.

Now, there is no need for this; people can do it alone. Even though this makes the work done, it comes at the cost of isolation.

I am sure for some people this would look like a win.

littlecranky67•35m ago
> I can see this happening. Earlier, more people worked in groups because they relied on their expertise.

It only isolates you if you let it isolate you. The pandemic shifted my life, as I have been working alone at home ever since. I am single, no kids, and after the pandemic ended I continued to stay "isolated". I knew about the dangers and took active measures - some of which were only possible because I was no longer required to go to an office. I moved to another country, to a location with a lot of international expats who work online, too. I built an active social circle, attending meetups, sport groups, bar nights, etc.

I am now more social and happier than ever, because my daily social interactions are not based on my work or profession, and I get to choose with whom I spend my time and meet for lunches. Before, the chores around hour-long commutes, grooming, packing my bag, meal prep, dressing properly, etc., just to sit in the office all day - all are gone from my schedule. I have more free time to socialize and maintain friendships, I pay less rent, and in general - due to the lower cost of living - my quality of life has improved significantly.

Without work-from-home this would not be possible. You could argue WFH results in isolation and depression, but for me it was liberating. It is, of course, each individual's own responsibility (and requires active work, sometimes hard work, too) to influence the outcome.

wiseowise•30m ago
Which country is that? If it’s not a secret.
fabian4•45m ago
AI can change how we work and think, but institutions didn't become fragile overnight because of it. Many of the pressures on universities, media, and governance predate AI by years or decades.

Using AI wisely can augment human capability without eroding institutional roles — the real question is how accountability, transparency, and critical thinking evolve alongside the technology.

wat10000•32m ago
When has humanity ever used technology wisely?
ertucetin•43m ago
Recently wrote a blog about this: https://ertu.dev/posts/ai-is-killing-our-online-interaction/
Sol-•41m ago
Do they assume that the current state of our institutions is normatively correct? AI progress will come and have manifold benefits, therefore we shouldn't really restrict it too much.

If the institutions cannot handle that, they will have to change or be destroyed. Take universities, for instance. Perhaps they will go away - but is this a great loss? Learning (in case it remains relevant) can be achieved more efficiently with a personal AI assistant for each student.

techblueberry•40m ago
I think one of the paradoxes of modern life is that one of the things we're all nostalgic for is institutions. Sure, we all believe in different institutions, but watching their decline, we all seem to be dancing on our own graves.
fullshark•18m ago
Sure I'm nostalgic for my own childish naivety, aren't we all?
reactordev•40m ago
I’m going to play devil’s advocate and cough recite a common argument from pro-gun Americans.

“It’s not guns that kill people, it’s people that kill people”.

It’s not “AI bad”, it’s about the people who train and deploy AI.

My agents always look for research material first - they won’t make stuff up. I’d rather they say “I can’t determine that” than fabricate an answer.

AI companies don’t care about institutions or civil law. They scraped the internet, copyright be damned. They indexed art and music and pay no royalties. If anything, the failure of protecting ourselves from ourselves is our fault.

baggachipz•27m ago
And the vehicle to protect ourselves is called "government". Looking around... yeah.
wrqvrwvq•15m ago
The common argument seems to hold up in both cases.
alphazard•38m ago
It's not AI. Human institutions rely on people acting in good faith. Almost every institution is held together by some kind of priesthood, where everyone assumes the priests know what they are doing, and the priests create a kind of seriousness around the topic to signal legitimacy and enforce norms on the other priests. This is true of Government, Law, Science, Finance, Medicine, etc.

But most of these institutions predate the existence of game theory, and it didn't occur to anyone how much they could be manipulated since they were not rigorously designed to be resistant to manipulation. Slowly, people stopped treating them like a child's tower of blocks that they didn't want to knock over. They started treating them like a load bearing structure, and they are crumbling.

Just as an example, the recent ICE deportation campaign is a direct reaction to a political party Sybil[0] attacking the US democracy. No one who worked on the constitution was thinking about that as a possibility, but most software engineers in 2026 have at least heard the term.

[0] https://en.wikipedia.org/wiki/Sybil_attack

noosphr•37m ago
Why isn't there a major lack of institutional trust in dentists? Between 1990 and today, fillings have gone from being torture to something that takes 30 minutes while I listen to a podcast. I've not met anyone who distrusts big dental. But fluoridated water is still a hot topic.

The best that the experts the paper talks about can do today is say that if we follow their advice, our lives will get worse more slowly. Not better. Just as bad as if we don't listen to them, but more slowly.

In the post war period people trusted institutions because life was getting better. Anyone could think back to 1920 and remember how they didn't have running water and how much a bucket weighed when walking up hill.

If big institutions want trust they should make peoples lives better again instead of working for special interests, be they ideological or monetary.

jjice•29m ago
> Why isn't there a major lack of institutional trust of dentists?

FWIW, I know a lot of people who refuse to go to the dentist unless it's an issue because they're one of the medical professions that seem to do the most upselling.

I go every six months for a cleaning and trust my dentist, but I can definitely see how these huge chain dentists become untrustworthy.

mellosouls•36m ago
Note the author here is presumably not so concerned about the decline of the institutions given his flag-waving for censorship in previous years, eg:

"Banning Trump from Twitter and Facebook isn’t nearly enough"

https://www.latimes.com/opinion/story/2021-01-15/facebook-tw...

Even if you dislike Trump, the 2010s campaign to suppress conservative voices (now largely reversed) that he argues for there was a significant contributor to the decline in authority and respect that academia has suffered in the eyes of the general populace.

To be clear, the same must also apply to any suppression of liberal voices - it's unacceptable in a culture that claims free speech.

But I am sceptical that this particular writer has a moral high ground from which to opine.

loudmax•31m ago
AI might be accelerating the trend, but there's been a populist revolt against institutions for over a decade. It's been happening long before ChatGPT, and this isn't just in Europe and the US. The erosion of trust in governments and institutions has been documented globally.

The obvious culprits being smartphones and social networking, though it's really hard to prove causality.

srijith259•27m ago
I understand that everyone is disillusioned with current institutions. But I don't understand the prevailing sentiment here that it's therefore okay for them to fail. For one, there will never be perfect institutions. One would think we would create technology to complement and improve them. Instead, recent technological trends seem to have done the opposite.

At the very least, this should make us reconsider what we are building and the incentives behind it.

book_mike•20m ago
My word, it doesn't have to be that way.
mistivia•19m ago
We have seen similar situations countless times before. Just as the Internet allowed people to bypass publishers to release text, and YouTube allowed creators to bypass TV stations and cinemas to release video, AI now allows people to bypass lawyers to read legal texts, which are often deliberately written to be indecipherable to an average person. AI will not destroy institutions, but it will pose challenges and lead to restructuring.
code51•13m ago
I know it's a draft, but why on earth can't people use page formats the same way (more or less)?

Material (full page) material material, sources at the end. Simple. Readable.

Material (half page), sources (half page). Material/source. Material/source. Looks quite unreadable to the eye.

dzink•11m ago
Think back to Maya history, when the rulers kept astronomy knowledge secret to pretend they were gods with control over celestial objects. If expensive education and publishing access gives someone power, and free education and publishing becomes a threat to their authority, that's not a good testament to how they used their educational advantage while they had it.

AI may be destroying truth by creating collective schizophrenia and backing different people’s delusions, or by being trained to create rage-bait for clicks, or by introducing vulnerabilities on critical software and hardware infrastructure. But if institutions feel threatened, their best bet is to become higher levels of abstraction, or to dig deeper into where they are truly deeply needed - providing transparency and research into all angles and weaknesses and abuses of AI models. Then surfacing how to make education more reliably scalable if AI is used.

avsteele•4m ago
Obviously ~nobody has read this yet... But I did have a question based on the opening:

"If you wanted to create a tool that would enable the destruction of institutions that prop up democratic life, you could not do better than artificial intelligence. Authoritarian leaders and technology oligarchs are deploing [sic] AI systems to hollow out public institutions with an astonishing alacrity"

So in the first two sentences we have hyperbole and typos? Hardly seems like high-quality academic output. It reads more like a blog post.