
Just the Browser

https://justthebrowser.com/
27•cl3misch•43m ago•0 comments

OpenBSD-current now runs as guest under Apple Hypervisor

https://www.undeadly.org/cgi?action=article;sid=20260115203619
301•gpi•9h ago•31 comments

List of individual trees

https://en.wikipedia.org/wiki/List_of_individual_trees
177•wilson090•12h ago•62 comments

The spectrum of isolation: From bare metal to WebAssembly

https://buildsoftwaresystems.com/post/guide-to-execution-environments/
39•ThierryBuilds•3h ago•15 comments

Cue Does It All, but Can It Literate?

https://xlii.space/cue/cue-does-it-all-but-can-it-literate/
27•xlii•3d ago•4 comments

Apple is fighting for TSMC capacity as Nvidia takes center stage

https://www.culpium.com/p/exclusiveapple-is-fighting-for-tsmc
705•speckx•21h ago•427 comments

Pocket TTS: A high quality TTS that gives your CPU a voice

https://kyutai.org/blog/2026-01-13-pocket-tts
479•pain_perdu•1d ago•111 comments

Interactive eBPF

https://ebpf.party/
51•samuel246•4h ago•4 comments

Briar keeps Iran connected via Bluetooth and Wi-Fi when the internet goes dark

https://briarproject.org/manual/fa/
405•us321•17h ago•224 comments

pf: Make af-to less magical

https://undeadly.org/cgi?action=article;sid=20260116085115
24•defrost•3h ago•1 comment

Inside The Internet Archive's Infrastructure

https://hackernoon.com/the-long-now-of-the-web-inside-the-internet-archives-fight-against-forgetting
367•dvrp•2d ago•93 comments

Linux boxes via SSH: suspended when disconnected

https://shellbox.dev/
232•messh•16h ago•133 comments

Bringing the Predators to Life in MAME

https://lysiwyg.mataroa.blog/blog/bringing-the-predators-to-life-in-mame/
30•msephton•2d ago•5 comments

Ask HN: How can we solve the loneliness epidemic?

619•publicdebates•19h ago•976 comments

My Gripes with Prolog

https://buttondown.com/hillelwayne/archive/my-gripes-with-prolog/
111•azhenley•12h ago•55 comments

Claude is good at assembling blocks, but still falls apart at creating them

https://www.approachwithalacrity.com/claude-ne/
264•bblcla•1d ago•191 comments

Primecoin and Cunningham Prime Chains

https://www.johndcook.com/blog/2026/01/10/prime-chains/
23•ibobev•4d ago•7 comments

On Being a Human Being in the Time of Collapse (2022) [pdf]

https://web.cs.ucdavis.edu/~rogaway/papers/crisis/crisis.pdf
115•barishnamazov•2h ago•92 comments

Altaid 8800

https://sunrise-ev.com/8080.htm
3•exvi•4d ago•0 comments

Data is the only moat

https://frontierai.substack.com/p/data-is-your-only-moat
163•cgwu•17h ago•32 comments

I Built a 1 Petabyte Server from Scratch [video]

https://www.youtube.com/watch?v=vVI7atoAeoo
94•zdw•5d ago•30 comments

Show HN: OpenWork – An open-source alternative to Claude Cowork

https://github.com/different-ai/openwork
196•ben_talent•2d ago•41 comments

JuiceFS is a distributed POSIX file system built on top of Redis and S3

https://github.com/juicedata/juicefs
156•tosh•18h ago•93 comments

Show HN: pgwire-replication - pure rust client for Postgres CDC

https://github.com/vnvo/pgwire-replication
7•sacs0ni•5d ago•3 comments

Go-legacy-winxp: Compile Golang 1.24 code for Windows XP

https://github.com/syncguy/go-legacy-winxp/tree/winxp-compat
119•Oxodao•3d ago•57 comments

All 23-Bit Still Lifes Are Glider Constructible

https://mvr.github.io/posts/xs23.html
104•HeliumHydride•12h ago•10 comments

Signal creator Moxie Marlinspike wants to do for AI what he did for messaging

https://arstechnica.com/security/2026/01/signal-creator-moxie-marlinspike-wants-to-do-for-ai-what...
9•aarghh•1h ago•1 comment

Show HN: BGP Scout – BGP Network Browser

https://bgpscout.io/
20•hivedc•11h ago•8 comments

First impressions of Claude Cowork

https://simonw.substack.com/p/first-impressions-of-claude-cowork
204•stosssik•2d ago•115 comments

CVEs affecting the Svelte ecosystem

https://svelte.dev/blog/cves-affecting-the-svelte-ecosystem
165•tobr•18h ago•28 comments

AI Destroys Institutions

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5870623
65•sean_the_geek•2h ago

Comments

sean_the_geek•2h ago
A thought-provoking essay on the impact of AI systems on civic institutions.
chrisjj•2h ago
Dupe of https://news.ycombinator.com/item?id=46622870
embedding-shape•2h ago
Not really a dupe if no one really discussed it. And I'm glad, since I didn't see it the previous time :)
emsign•2h ago
> The affordances of AI systems have the effect of eroding expertise, short-circuiting decision-making, and isolating people from each other.

This affordability is HEAVILY subsidized by billionaires who want to destroy institutions for selfish and ideological reasons.

DarkNova6•1h ago
This is literally corporate textbook 101. Subsidize your product, become the market leader, cause lock-in, and make your customers dependent.

Every large enough corporation wants to become the new Oracle.

ahartmetz•1h ago
Given how nobody properly understands LLMs, I doubt that they are intentionally designed like that. But the effect... yeah. I can see that happening.

(By the way, are you confusing affordance, the UX concept, with affordability?)

esperent•1h ago
Nobody properly understands dog brains either and yet you can still train a dog to sit.
ahartmetz•1h ago
If you just met a dog for the first time, you can't :) - my guess is LLMs are somewhere in between. It would be cool to see what happens if somebody tried to make an LLM that somehow has ethical principles (instead of guardrails) and is much less eager to please.
tyingq•1h ago
You can intentionally market the use cases without knowing exactly how they work, though. So it's intentional investment and use case targeting, rather than directly designing for purpose. Though, the market also drives the measures...so they iteratively get better at things you pour money into.
wesammikhail•1h ago
The institutions have been doing a fine job of destroying their credibility and utility all on their own, for far longer than this new AI hype cycle has existed.

ZIRP, Covid, Anti-nuclear power, immigration crisis across the west, debt enslavement of future generations to buy votes, socializing losses and privatizing gains... Nancy is a better investor than Warren.

I am not defending billionaires; the vast majority of them are grifting scum. But to put this at their feet is not the right level of analysis when the institutions themselves are actively working to undermine the populace for the benefit of those who are supposed to be their stewards.

layer8•27m ago
I think you have misread the word “affordances”. It’s not about affordability [0]. The main text also explains what it means.

[0] https://en.wikipedia.org/wiki/Affordance

rfv6723•1h ago
This dire warning against AI echoes the anxieties of a much earlier elite: the late-medieval clergy facing the invention of the printing press. For centuries, they held a privileged monopoly on knowledge, controlling its interpretation and dissemination. The printing press threatened to shatter that authority by democratizing access to information and empowering individuals.

Similarly, today's critics, often from within the very institutions they defend, frame AI as a threat to "expertise" and "civic life" when in reality, they fear it as a threat to their own status as the sole arbiters of truth. Their resistance is less a principled defense of democracy and more a desperate attempt to protect a crumbling monopoly on knowledge.

embedding-shape•1h ago
If what you say were true, why do people not within those institutions also try to warn others about the potential downfall of "expertise" and "civic life"? Are they just misinformed? Paid by these "institutional defenders"? Or what is your hypothesis?
rfv6723•1h ago
The alarm isn't coming from outside the institutions; it's coming from a wider, more modern clergy. The new priestly class isn't defined by a specific building, but by a shared claim to the mastery of complex symbolic knowledge.

The linguists who call AI a "stochastic parrot" are the perfect example. Their panic isn't for the public good; it's the existential terror of seeing a machine master language without needing their decades of grammatical theory. They are watching their entire intellectual paradigm—their very claim to authority—be rendered obsolete.

This isn't a grassroots movement. It's an immune response from the cognitive elite, desperately trying to delegitimize a technology that threatens to trivialize their expertise. They aren't defending society; they're defending their status.

raincole•1h ago
> Their panic isn't for the public good; it's the existential terror of seeing a machine master language without needing their decades of grammatical theory.

That's a wild claim. Every linguist worth their salt has known that you don't need grammatical theory to reach native level. Grammar being descriptive rather than prescriptive is the mainstream view, and had been long before LLMs.

If you actually ask them, I bet most linguists will say they are not even excellent English (or whichever language they studied the most) teachers.

Plus, "stochastic parrot" was coined before ChatGPT. If linguists really felt that threatened back when people's concerns over AI were more like "sure it can beat a go master, but how about League of Legends?", you have to admit they did have some special insight, right?

rfv6723•1h ago
You've mistaken the battlefield. This isn't about descriptive grammar. It's about the decades-long dominance of Chomsky's entire philosophy of language.

His central argument has always been that language is too complex and nuanced to be learned simply from exposure. Therefore, he concluded, humans must possess an innate, pre-wired "language organ"—a Universal Grammar.

LLMs are a spectacular demolition of that premise. They prove that with a vast enough dataset, complex linguistic structure can be mastered through statistical pattern recognition alone.

The panic from Chomsky and his acolytes isn't that of a humble linguist. It is the fury of a high priest watching a machine commit the ultimate heresy: achieving linguistic mastery without needing his innate, god-given grammar.

raincole•1h ago
> LLMs are a spectacular demolition of that premise.

It really isn't. While I personally think the Universal Grammar theory is flawed (or at least Chomsky's presentation of it is flawed), LLMs don't debunk it.

Right now we have machines that recognize faces better than humans. But that doesn't mean humans lack some innate biological "hardware" for facial recognition that machines don't possess. The machines simply outperform the biological hardware with their own, different approach.

Also, I highly recommend you express your ideas with your own words instead of letting an LLM present them. It's painfully obvious.

adrian_b•1h ago
I do not see how it can be claimed that "LLMs are a spectacular demolition of that premise", because LLMs must be trained on an amount of text far greater than what a human is ever exposed to.

I learned one foreign language just by being exposed to it almost daily, by watching movies spoken in that language, without any additional aids like a dictionary or a grammar (none were available where I lived; this was before the Internet). However, I was helped in guessing the meanings of words and the grammar of the language not only by seeing what the characters in the movies were doing, correlated with the spoken phrases, but also by the fact that I already knew a couple of languages with many similarities to the language of the movies I was watching.

In any case, the amount of spoken language I was exposed to over the year or so it took to become fluent was many orders of magnitude less than what is used to train an LLM.

I do not know whether any innate knowledge of grammar was involved, but knowing the grammar of other languages certainly helped tremendously in reducing the amount of text I needed to see, because after only a few examples I could guess the generally applicable grammar rules.

There is no doubt that the way an LLM learns is much dumber than how a human learns, which is why it must be compensated for with a much bigger amount of training data.

The current inefficiency of LLM training has already caused serious problems for a great number of people, who either had to give up on buying various kinds of electronic devices or had to accept devices of much worse quality than they had planned, because prices for DRAM modules and large SSDs have skyrocketed due to the hoarding of memory devices by the rich, who hope to become richer by using LLMs. Given that, I believe it has been proven beyond doubt that the way LLMs learn is, for now, not good enough, and it is certainly not a positive achievement, as more people have been hurt by it than have benefited from it.

intended•1h ago
The first weakness of your claim is that it is inherently one of the elite.

You read the works of the cognitive elite when they support AI. When most people sing its praises, it's from the highest echelons of the white-collar priesthood.

AI is fundamentally a tool of the cognitively trained, and shows its greatest capability in the hands of those who can assess its output as accurate at a glance. The more complex the domain, the deeper the expertise needed to find value in it.

Secondly, linguists are not the sole group voicing concerns about these tools. I've seen random streamers and normal folk in WhatsApp groups, completely disconnected from the AI elite, hating what is being wrought. Students and young adults outright wonder whether they will have any worthwhile economic future.

Perhaps it is not a "movement", but there is an all-pervasive fear and concern in the population when it comes to AI.

Finally, this position is eerily similar to the dismissal of concerns from mid-level and factory-floor workers in the 80s and 90s. That dismissal was forgivable, given the then-prevalent belief that people would be retrained and reabsorbed into equivalently sustaining roles in new industries.

raincole•1h ago
> Are they just misinformed?

Not all of them, but given that the same questionable or outright false assumptions (e.g. that AI companies are doing inference at a loss, the exaggerated water-consumption numbers, etc.) keep getting repeated on YouTube, Reddit, and even HN, where the user base is far more tech-savvy than the general population, I think misinformation is the primary reason.

terminalshort•1h ago
In most cases those people are members of the upper class who hold credentials issued by those institutions, and are often in professions protected by state-enforced cartels where the ticket for entry is one of said credentials.
embedding-shape•1h ago
> In most cases those people are members of the upper class who hold credentials issued by those institutions

Right, but in my comment I'm explicitly asking about the ones that don't have any relation yet seem to defend it anyway. "Those people don't actually exist" isn't really an argument...

boelboel•1h ago
So you're saying the codemonkeys are mad they don't get seen as the "cool guys", so we have to kill the jobs the "cool guys" have. The codemonkeys will never be cool; just accept it, there's no way to fix it. These cool guys will for the most part stay "cool" even if you take away their jobs right now.
phoe-krk•1h ago
> a desperate attempt to protect a crumbling monopoly on knowledge

More like a war on traditional, human-based knowledge, waged by people who believe that, by cornering the world's supply of RAM, SSDs, GPUs, and whatnot, they can achieve their own monopoly on knowledge under the pretense of liberating it. Note that running your own LLM becomes impossible if you can no longer afford the hardware to run it on.

wartywhoa23•1h ago
Surely we'll all beat monopolies by running our own local LLMs, storing whole blockchains on our local storage, building our own atomic power plants, flying our own airlines and launching our own satellites via our own rocket fleets. And producing our own trillion-transistor silicon in our own fabs.

We just have to start printing our own money and buying us some pocket armies and puppet politicians first.

hojofpodge•1h ago
The current bubble's effect on hardware is alarming, but if they think they are going to create a permanent economic manipulation, they are deluded. The US's hold on export controls is eroding at an ever-faster rate, and China will get to "good enough" all the sooner if the price/spec ratio stays absurdly high.

Cryptocurrency makers can impose artificial limits, but no amount of limiting access to gpt-next will cut off access to "good enough".

terminalshort•1h ago
Better that I'm forced to rent an LLM from a tech monopolist for a few dollars than be forced to hire a member of the lawyers cartel for $500 an hour.
intended•1h ago
Come now. You mean the highly regulated, more competitive world of law? And as it is practiced in America, the onetime capital of economic competition?

That “cartel”?

Vs the leaders of an industry that built their tools through insane amounts of copyright infringement, and have forced the coining of “enshittification” to describe all pervasive business strategies?

The same industry which employs acqui-hire to find ways to cull competition?

DarkNova6•1h ago
An institution is worth nothing without the spirit, humanity, and exchange of knowledge among the people behind it. Fostering real expertise is difficult, but without that expertise you are doomed to believe whatever your corporate AI tells you.

So is the AI better?

No. It's quicker, easier, more seductive.

archievillain•1h ago
This is a good analogy, but you made it backwards. The "Clergy" fears the "Printing Press", as it acts as a tool of decentralized information spreading. But LLMs are not decentralized and thus are not the "Printing Press". LLMs are what the "Clergy" (say, for example, all the AI companies led by billionaires in cahoots with the west's most powerful government) uses to suppress the real "Printing Press" (the decentralized, open internet, where everybody can host and be reached).
dgb23•1h ago
It was the same clergy (or rather parts of it) that used the printing press to great success.

Martin Luther used it to spread his influence extremely quickly for example. Similarly, the clergy used new innovations in book layout and writing to spread Christianity across Europe a thousand years before that.

What is weird about LLMs though, is that it isn't a simple catalyst of human labor. The printing press or the internet can be used to spread information quickly that you have previously compiled or created. These technologies both have a democratizing effect and have objectively created new opportunities.

But LLMs are to some degree parasitical to human labor. I feel like their centralizing effect is stronger than their democratizing one.

bugglebeetle•1h ago
Everyone who tells the story of the reformation leaves out that Martin Luther also used this new technology to widely disseminate his deranged anti-Semitic lies and conspiracies, leading to pogroms against Jews, a hundred years of war across Europe, and providing the ideological basis for the rise of Nazism.
terminalshort•1h ago
Martin Luther was clergy, but he was absolutely not "the same clergy."
energy123•1h ago
This is a criticism of the author's backgrounds rather than the content of the article.
gabaix•1h ago
True. I myself try to read articles without looking up the authors.

It is hard though. When someone makes an extraordinary claim I feel the urge to look them up. It is a shortcut to gauging the legitimacy of that claim.

NoboruWataya•1h ago
Most of the comments here are. HN hates lawyers.
__loam•1h ago
It's so ridiculous to make this argument when the ones who stand to benefit most from this technology are the massive corporations that can subsidize its compute and capital costs. Is it democratization when Google pulls something you wrote on your website and runs it through an LLM so it can serve it directly to a user? You say people see this as a threat to their status, but the reality is that this is a massive consolidation of the internet's information economy in the hands of a few corporate interests.
mawadev•1h ago
I think this could be applied to most fields where LLMs move in. Let's take the field we are probably most familiar with.

Companies are currently shifting from enhancing their employees' productivity by giving them access to LLMs to offshoring to lower-cost countries, giving the cheap labor LLMs to bypass language and quality barriers. The position isn't lost; it's just moving somewhere else.

In software development this won't be the anxiety of an elite, or a threat to expertise or status, but a direct consequence for livelihoods when people aren't hired and lose access to the economy until they retrain for a different field. So on top of that you can argue about authority and control, but the anxiety has economic roots.

In that sense, doesn't any knowledge work rest on a monopoly on knowledge? The entire point is to have experts who know the details and have the experience, so that things get done as expected, since not many have the time or the capability to get into the critical details.

If you believe there is any goodwill when you centralize that knowledge in the hands of even fewer people, you reproduce the same pattern you are complaining about, especially given how businesses tweak their margins. It really is a force multiplier and an equalizer, but it is a tool that can be used in good or bad ways depending on how you look at it.

ruraljuror•1h ago
Is that what happened? In Nexus, Harari looks at this exact situation, the invention of the printing press, and shows how the clergy used it to stoke witch hunts (ahem, misinformation) for decades, if not centuries. It was not until hundreds of years after the invention of the printing press that we had the Enlightenment. What gave rise to the Enlightenment? Harari argues it was modern institutions.

It's not so simple that we can say "printing press good, nobody speak ill of the printing press."

anonymous908213•1h ago
It is funny watching people debate at length with your LLM word-vomit. I'm not sure whether you yourself are convinced that the soup you've copypasted across multiple replies means anything, but apparently some people are convinced enough to argue with it, so this is pretty great satire in one way or another.
b65e8bee43c2ed0•45m ago
it feels good to watch the aforementioned clergy kvetch about AI while multiple multi-trillion dollar corporations backed by a friendly administration continue to run their bulldozers :)
fatherwavelet•1h ago
The printing press was also used to print witch hunting books and caused 200 years of mass hysteria around witches and witch trials.

Before the printing press, only the clergy could "identify" witches, but the printing press "democratized knowledge" of witch identification at a larger scale.

The algorithmic version of "It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so" is going to cause huge trouble in the short and medium term.

toofy•58m ago
this is much, much closer to going in reverse, back to when the church was the decider, rather than liberating knowledge the way the printing press did.

the church did the thinking for the peasants. the church decided what the peasants heard, etc… this is moving absolutely in that direction.

the models now do the thinking for us, the ai companies decide what we get to see, these companies decide how much we pay to access it. this is the future.

paganel•1h ago
> free press

Stopped reading here, as these people still believe in that fairytale of theirs.

tlb•53m ago
The American press isn't perfectly free, but you should see what a state-controlled press is like.
terminalshort•1h ago
This is nothing but speculation written by lawyers in the format of a scientific paper to feign legitimacy. Of course those $500-an-hour nitpickers are terrified of AI, because it threatens the exorbitant income of their cartel-protected profession.
__loam•1h ago
Enough people have gotten owned for using these things in court that I think the more likely response is laughing at the ignorance rather than feeling threatened.
terminalshort•1h ago
1. Get owned in court because you used an LLM that made a poor legal argument.

2. Get owned out of court because you couldn't afford the $100K (minimum) that you have to pay to the lawyer's cartel to even be able to make your argument in front of a judge.

I'll take number 1. At least you have a fighting chance. And it's only going to get better. LLMs today are the worst they will ever be, whereas the lawyer's cartel rarely gets better and never cuts its prices.

pousada•31m ago
Does it cost 100k minimum in the US to get a lawyer? Or am I misunderstanding something?
DannyBee•1h ago
Care to actually engage with the text instead of deciding to paint the entire profession with a crappy brush?

I guess I'll start here: calling two well-known law professors "$500 an hour nitpickers" when they have been professors for 15+ years (20+ in Jessica's case), and so aren't earning anything close to $500 an hour, is not a great start.

I don't know if they are nitpickers; I've never taken their classes :)

Also, this is an op-ed, not a science paper. Which you'd know if you had bothered to read it at all.

You say elsewhere that you didn't bother to read anything other than the abstract, because "you didn't need to". So besides being a totally uninformed opinion, complaining about something else being speculation when you are literally speculating about the contents of the paper is pretty ironic.

I also find it amazingly humorous given that Jessica's previous papers on IP have been celebrated by HN, in part because she roughly believes copyright/patents as they currently exist are glorified BS that doesn't help anything, and has written many papers as to why :)

terminalshort•1h ago
I dismiss the paper for 3 reasons:

1. It is entirely based on speculation of what is going to happen in the future.

2. The authors have a clear financial (and status based) interest in the outcome.

3. I have a negative opinion of lawyers and universities due to personal experience. (This is, of course, the weakest point by far.)

Speculation on future outcomes is not by itself a bad thing, but when that speculation is formatted like a scientific paper describing an experimental result, I immediately feel I am being manipulated by appeal to authority. And the conflict of interest of the authors is about as relevant as pointing out that a paper on why oxycodone is not addictive was paid for by Purdue Pharma. Perhaps Jessica's papers on IP are respected because they do not suffer from these obvious flaws? I owe the author no deference for the quality of her previous writing nor for her status as a professor.

__0x01•1h ago
> This is nothing but speculation

Did you read the paper?

terminalshort•1h ago
It's written in the future tense, so I can safely call it speculation. I've read the abstract which is all I need to decide the full text is not worth my time.
DannyBee•1h ago
Cool, then we can safely give your comments exactly the same treatment - since they are completely uninformed speculation about a paper you haven't read.
terminalshort•1h ago
And you must have read all 40 pages of it, right? Because if not you are a hypocrite. I claim that the Bible is the literal truth. Oh, you haven't read every word of the Bible? Your arguments against me are worthless!
well_ackshually•1h ago
Please go to court using only ChatGPT as legal defense, I'd love to see it, it's going to make for great entertainment. The judge a little bit less so.

You can criticise the hourly cost of lawyers all you like, and it should be a beautiful demonstration to people like you that no, "high costs mean more people go into the profession and lower the costs" is not and has never been a reality. But to think that any AI could ever be efficient in a system such as common law, the most batshit insane, inefficient, "rhetoric matters more than logic" system, is delusional.

boelboel•1h ago
Tech workers know it all, no way a non-tech job could be worth anything more than 20 dollars an hour.
contrarian1234•1h ago
Just from reading the abstract, it feels like the authors didn't even attempt to be objective. It's hard to take what they're saying seriously when the language is so loaded and full of judgments: the kind of language you'd expect in an op-ed, not a research paper.
DannyBee•1h ago
I think you may be confused. This is not a research paper, it's an op-ed in a law journal.

SSRN is where most draft law review/journal articles are published, which may be the source of confusion.

For most other fields, it is a source of draft/published science papers, but for law, it's pretty much any kind of article that is going to show up in a law review/journal.

chilmers•43m ago
It is literally called “Boston Univ. School of Law Research Paper No. 5870623”
jugoetz•38m ago
It's an essay. Being opinionated is a feature.
charcircuit•1h ago
None of this paper's arguments are AI-specific. The IRS doesn't need AI to make mistakes and be unable to tell you why it made them. You can already find stories of that happening to people.
toofy•1h ago
i think when most people bring up the mistakes these models make, much of their concern is that little can be done about them.

when one of the juniors makes a mistake, i can talk to them about it and help them understand where they went wrong, if they continue to make mistakes we can change their position to something more suited for them. we can always let them go if they have too much hubris to learn.

who do we hold to account when a model makes a mistake? we’re already beginning to see, after major fuckups, companies null-routing accountability into “not our fault, don’t look at us, the ai was wrong”

the other thing is, if you have done a good job selecting your team, you’ll have people who understand their limits, who understand when to ask for help, who understand when they don’t know something. a major problem with current models is that they will always just guess or stretch toward randomness rather than halt.

so yes, people will make mistakes, but at least you can count on being able to mitigate those afterwards.

qsera•1h ago
We should be more worried about what AI will do to the average human's ability to think.

Not that I think there is a lot of thinking going on now anyway, thanks to our beloved smartphones.

But just think about a time when human ability to reason has atrophied globally. AI might even give us true Idiocracy!

fennecbutt•49m ago
I mean we were seeing this even before AI. It's the same type of person. To slop is human.

It's like for some reason we thought that some good percentage of us aren't just tribal worker drones who fundamentally just want fats, sugars, salts, dopamine, and serotonin. People actively vote against things like UBI, higher corporate taxes, and making utilities public. People actively choose to believe misinformation because it suits their own personal tribal narratives.

juggle-anyhow•1h ago
Who do institutions serve? To me, AI democratises information, allowing access to information that would normally be gatekept. AI reduces barriers, and they don't like that, because those barriers gave them authority.
magpi3•1h ago
> Purpose-driven institutions built around transparency, cooperation, and accountability empower individuals to take intellectual risks and challenge the status quo.

I am not sure if I am off-topic, but I am having a lot of trouble with this statement. Institutions are often opaque, and I have never belonged to an institution that empowered me to "take intellectual risks and challenge the status quo." Quite the contrary.

popalchemist•28m ago
"purpose-driven" is the relevant qualifier here.
intended•57m ago
I fear the title of this article is going to drive most of the conversation.

I haven’t read through the whole thing yet, but so far the parts of the argument I can pull out are about how institutions actually work, as collections of humans. AI, as it currently stands, interacts with humans in ways that hollow out the kind of behavior we want from institutions.

“Perhaps if human nature were a little less vulnerable to the siren’s call of shortcuts, then AI could achieve the potential its creators envisioned for it. But that is not the world we live in. Short-term political and financial incentives amplify the worst aspects of AI systems, including domination of human will, abrogation of accountability, delegation of responsibility, and obfuscation of knowledge and control”

An analogy I find increasingly useful is someone using a forklift to lift weights at the gym. There is an observable tendency, when using LLMs, to cede agency entirely to the machine.

layer8•35m ago
I was amused at how they quote WarGames.