This affordability is HEAVILY subsidized by billionaires who want to destroy institutions for selfish and ideological reasons.
Every sufficiently large corporation wants to become the new Oracle.
(By the way, are you confusing affordance, the UX concept, with affordability?)
ZIRP, Covid, anti-nuclear-power policy, the immigration crisis across the West, debt enslavement of future generations to buy votes, socializing losses and privatizing gains... Nancy is a better investor than Warren.
I am not defending billionaires; the vast majority of them are grifting scum. But laying this at their feet is not the right level of analysis when the institutions themselves are actively working to undermine the populace for the benefit of those who are supposed to be stewards of said institutions.
Similarly, today's critics, often from within the very institutions they defend, frame AI as a threat to "expertise" and "civic life" when in reality, they fear it as a threat to their own status as the sole arbiters of truth. Their resistance is less a principled defense of democracy and more a desperate attempt to protect a crumbling monopoly on knowledge.
The linguists who call AI a "stochastic parrot" are the perfect example. Their panic isn't for the public good; it's the existential terror of seeing a machine master language without needing their decades of grammatical theory. They are watching their entire intellectual paradigm—their very claim to authority—be rendered obsolete.
This isn't a grassroots movement. It's an immune response from the cognitive elite, desperately trying to delegitimize a technology that threatens to trivialize their expertise. They aren't defending society; they're defending their status.
That's a wild claim. Every linguist worth their salt has known that you don't need grammatical theory to reach native-level proficiency. Grammar being descriptive rather than prescriptive is the mainstream view, and has been since long before LLMs.
If you actually ask them, I bet most linguists will say they aren't even excellent teachers of English (or whichever language they've studied the most).
Plus, "stochastic parrot" was coined before ChatGPT. If linguists really felt that threatened by the time when people's concerns over AI was like "sure it can beat go master but how about league of legends?" you have to admit they did have some special sights, right?
His central argument has always been that language is too complex and nuanced to be learned simply from exposure. Therefore, he concluded, humans must possess an innate, pre-wired "language organ"—a Universal Grammar.
LLMs are a spectacular demolition of that premise. They prove that with a vast enough dataset, complex linguistic structure can be mastered through statistical pattern recognition alone.
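(To make "statistical pattern recognition" concrete, here is a minimal sketch of the idea at its crudest: a toy bigram Markov model, written by me purely for illustration and not taken from the thread or from any LLM internals. It encodes no grammatical rules at all, only observed word-to-word statistics, yet it already emits locally plausible sequences; LLMs push the same statistical principle to an incomparably larger scale.)

    import random
    from collections import defaultdict

    # Toy illustration: learn word-to-word transition statistics from raw text,
    # with no grammar rules encoded anywhere.
    corpus = "the cat sat on the mat . the dog sat on the rug .".split()

    transitions = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        transitions[prev].append(nxt)          # record every observed successor

    def generate(start="the", length=8):
        word, out = start, [start]
        for _ in range(length - 1):
            followers = transitions.get(word)
            if not followers:                   # dead end: no observed successor
                break
            word = random.choice(followers)     # sample the next word from the statistics
            out.append(word)
        return " ".join(out)

    print(generate())   # e.g. "the cat sat on the rug . the"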
The panic from Chomsky and his acolytes isn't that of a humble linguist. It is the fury of a high priest watching a machine commit the ultimate heresy: achieving linguistic mastery without needing his innate, god-given grammar.
It really isn't. While I personally think the Universal Grammar theory is flawed (or at least Chomsky's presentation of it is flawed), LLMs don't debunk it.
Right now we have machines that recognize faces better than humans do. But that doesn't mean humans lack some innate biological "hardware" for facial recognition that machines don't possess. The machines simply outperform the biological hardware with their own, different approach.
Not all of them, but given that the same questionable or outright false assumptions (e.g. that AI companies are doing inference at a loss, the exaggerated water-consumption numbers, etc.) keep getting repeated on YouTube, Reddit, and even HN, where the user base is far more tech-savvy than the general population, I think misinformation is the primary reason.
Right, but in my comment I'm explicitly asking about the ones that don't have any connection to it yet seem to defend it anyway? "Those people don't actually exist" isn't really an argument...
More like a war on traditional, human-based knowledge, waged by people who believe that, by cornering the world's supply of RAM, SSDs, GPUs, and whatnot, they can achieve their own monopoly on knowledge under the pretense of liberating it. Note that running your own LLM becomes impossible if you can no longer afford the hardware to run it on.
We just have to start printing our own money and buying us some pocket armies and puppet politicians first.
Cryptocurrency makers can impose artificial limits, but no amount of restricting access to gpt-next is going to cut off access to "good enough" models.
So is the AI better?
No. It's quicker, easier, more seductive.
Martin Luther used it to spread his influence extremely quickly, for example. Similarly, the clergy used new innovations in book layout and writing to spread Christianity across Europe a thousand years before that.
What is weird about LLMs though, is that it isn't a simple catalyst of human labor. The printing press or the internet can be used to spread information quickly that you have previously compiled or created. These technologies both have a democratizing effect and have objectively created new opportunities.
But LLMs are to some degree parasitical to human labor. I feel like their centralizing effect is stronger than their democratizing one.
It is hard, though. When someone makes an extraordinary claim, I feel the urge to look them up. It's a shortcut to lending some legitimacy to that claim.
Companies are currently starting to shift away from enhancing the productivity of their existing employees by giving them access to LLMs; instead, they offshore to lower-cost countries and give the cheap labor LLMs to bypass language and quality barriers. The position isn't lost, it's just moving somewhere else.
In the field of software development this won't be the anxiety of an elite, or a threat to expertise or status, but rather a direct threat to livelihoods when people aren't hired and lose access to the economy until they retrain for a different field. On top of that you can argue about authority and control, but it is mostly the economic factors that produce the anxiety.
In that sense, doesn't any knowledge work amount to a monopoly on knowledge? The entire point of having experts in a field is that they know the details and have the experience, so that things get done as expected, since not many people have the time or the ability to get into the critical details.
If you believe there is any goodwill when you centralize that knowledge into the hands of even fewer people, you reproduce the same pattern you are complaining about, especially when it comes to how businesses tweak their margins. It really is a force multiplier and an equalizer, but it is a tool that can be used in good ways or bad ways depending on how you look at it.
It's not so simple that we can say "printing press good, nobody speak ill of the printing press."
Stopped reading here, as these people still believe in that fairytale of theirs.
2. Get owned out of court because you couldn't afford the $100K (minimum) that you have to pay to the lawyers' cartel to even be able to make your argument in front of a judge.
I'll take number 1. At least you have a fighting chance. And it's only going to get better. LLMs today are the worst they will ever be, whereas the lawyers' cartel rarely gets better and never cuts its prices.
I guess I'll start with this: calling two well-known law professors "$500 an hour nitpickers" when they have been professors for 15+ years (20+ in Jessica's case) and aren't earning anything close to $500 an hour is not a great start, is it?
I don't know if they are nitpickers; I've never taken their classes :)
Also, this is an op-ed, not a science paper. Which you'd know if you had bothered to read it at all.
You say elsewhere that you didn't bother to read anything other than the abstract, because "you didn't need to". So besides your opinion being totally uninformed, complaining about something else being speculation while you are literally speculating about the contents of the paper is pretty ironic.
I also find it amazingly humorous given that Jessica's previous papers on IP have been celebrated by HN, in part because she roughly believes that copyright and patents as they currently exist are glorified BS that doesn't help anything, and she has written many papers explaining why :)
Did you read the paper?
You can criticise the hourly cost of lawyers all you like, and it should be a beautiful demonstration to people like you that no, "high costs mean more people go into the profession and lower the costs" is not and has never been a reality. But to think that any AI could ever be efficient in a system like common law, the most batshit insane, inefficient, "rhetoric matters more than logic" system there is, is delusional.
Not that I think there is a lot of thinking going on now anyway, thanks to our beloved smartphones.
But just think about a time when the human ability to reason has atrophied globally. AI might even give us true Idiocracy!