Someone should tell Anna’s Archive.
The US’s criminal enforcement very much falls into the “rules for thee, but not for me” category, but invoking it here is a trope. Anyone can get away with piracy on the scale of Books3 or The Pile. The reason random people don’t make models is that the hardware and power costs are fucking astronomical, not that they can’t get away with downloading the training data.
These sort of hot takes are just as wrong as the breathless “AGI is right around the corner” ones.
AI is hugely transformative, and anyone who thinks it’s overhyped doesn’t know the SOTA. It will likely be the single biggest technological advancement of our lifetime.
This book thing at Meta is something we should never forget. It revealed how utterly broken the US is in this regard, hope they get it sorted. Without the rule of law you'll get a shit country.
This has always been the case.
By your definition, the US has never really had the rule of law.
The obvious counter example would be Aaron Swartz
2. We don’t know if he would have gotten away with it or not. Mental illness killed him via suicide, not the federal indictment.
There are several EXTREMELY large pirate libraries in operation presently that anyone can use. They are actively getting away with it, likely because they are explicitly staying anonymous.
Maybe if we all pretend AI is totally useless and will never improve, then I won’t have to worry about my job or economic value changing?
Here is the core argument: "an artificial intelligence that could equal or exceed human intelligence—sometimes called artificial general intelligence (AGI)—is for mathematical reasons impossible." It offers two specific reasons for this claim: 1. Human intelligence is a capability of a complex dynamic system—the human brain and central nervous system. 2. Systems of this sort cannot be modelled mathematically in a way that allows them to operate inside a computer.
1. Bird flight is a capability of a complex dynamic system -- the bird's musculoskeletal system and its brain.
2. Systems of this sort cannot be modelled mathematically in a way that allows them to operate inside a machine.
The previous solution is a biological brain, and the future solutions are mechanical, but that doesn't matter. Even if it did, such arguments involve little more than waving one's hands about and claiming that there's some poorly specified fundamental difference.
There isn't.
And if the answer is nothing, what would prevent such a dynamic system from being emulated? If the answer is real-time data, that can be fed into the emulation's 'world model' in numerous ways.
If I put a website up, anything goes as long as people don't DDoS me. A human crawls at ~1Hz; if a bot hits the site at 1000Hz, that's a denial-of-service attack. It's hard to block because you can't simply rate-limit by IP when many people share the same IP, so you need heuristics, cookies, etc.
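A minimal sketch of the kind of throttle being described (names and numbers here are illustrative, not from any real server): a token bucket allows short human-like bursts while capping the sustained request rate, and the key you bucket on is exactly the hard part, since keying on IP alone punishes everyone behind a shared address.

```python
import time


class TokenBucket:
    """Allow short bursts but cap the sustained request rate per client key."""

    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec      # tokens refilled per second
        self.capacity = burst         # maximum burst size
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, never beyond capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False


# One bucket per client key. IP is the naive choice; as noted above,
# cookies or other heuristics are needed when many users share an address.
buckets: dict[str, TokenBucket] = {}


def allowed(key: str) -> bool:
    bucket = buckets.setdefault(key, TokenBucket(rate_per_sec=1.0, burst=5))
    return bucket.allow()
```

A bot hammering at 1000Hz exhausts its bucket almost immediately, while a ~1Hz human never notices the limit.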
Putting paywalled content (such as books) into the AI is not cool, though. Nobody was anticipating this; people got effed unexpectedly. This is piracy at hyperscale. Not fair.
Personally, I eschewed all of those — except for LLMs. I'm convinced this one's for real. People can use "hype" to mean a number of things, though.
AGI by the end of the year? Hype.
Decimation of white-collar jobs? Hype.
Fundamentally new paradigm and tech the world will have to adapt to? Not hype at all.
> Eventually, these companies tried something new: agents.
Yeah, that one's still on the hype shelf for me.
> This floods social media and websites with low-quality, copy-paste content.
No! Welp, there goes social media. ;-)
> ChatGPT has around 500 million weekly active users, but only around 20 million of them actually pay for a subscription. That means the vast majority of people think it’s not worth $20 a month.
You could say the same for YouTube (where the free-to-paying ratio is likely even more lopsided).
When you offer a free version, don't be surprised if most users (meekly raises hand) save their pocket money for something else. These are early days and people are sussing out what the thing can do for them.
> To me, Apple stands out. They aren’t trying to build a chatbot that knows everything. Instead, they’re focused on using AI as a tool to help people interact with their apps and data in smarter ways.
That feels like Apple in gap-filling mode: trying to show the world they're doing something while smart people are trying to figure out what Apple really ought to be doing.
They design their own chips; perhaps they could build a dedicated LLM client architecture that runs on-device? It makes you wonder what Jony Ive (and investors) could possibly be thinking when Apple could easily pivot and own the aiPhone market.
Waiting for my own aiPhone someday — with encrypted history saved to my cloud account. Wondering what it will be like for future generations who will have had a personal confidant for decades — since they were teenagers…
Clearly a lot did change but most of the bolder predictions still ended up not coming true.
So much so that I'm somewhat surprised I keep reading articles that say essentially, "Don't believe market-speak from someone who is trying to sell you something."
Yeah, you should never do that.
Or that, minus a few exceptions like Blockbuster, the majority of high-street/mall stores would still exist in spite of online shopping being given more favourable tax treatment.
Or that democratic institutions would end up being eroded by the toxic spam that popped up when the barrier for entry for publishing was lowered.
I somewhat agree with both your and GP's perspectives. It's getting more hype than it has earned, as has the promise that this path leads to AGI, given that 10× larger models are yielding diminishing returns on performance. But it's not vaporware; it can produce fluent text faster and cheaper than humans, so it doesn't go in the "why are they buying?" bin with NFTs.
The questions getting lost in the middle are "do we need to churn out even more code and trust that it's been reviewed?" and "is using this going to semi-permanently disable most of the knowledge workers?"
If there's even a chance that my executive functions and related mental faculties are degraded by using LLMs then I would rather not. I try it a little and keep a finger on the pulse of the community that are going all-in on it. If it does transform into something that's 99% accurate and with a knob letting me dictate volume of output, I'll put more effort into learning how to hold it. And hopefully by then we'll be able to confirm or refute any of the long-term side effects.
Just install Claude Code in YOLO mode with sudo access (sudo password in claude.md) on a server or laptop and interact with the computer through it.
For me, this changed everything.
I generally agree with your statements. But I personally tend to think of the various ML flavors as only a natural evolution of the same Turing/von Neumann paradigms. Neural networks simulate aspects of cognition but don’t redefine the computation model. They are vectorized functions, optimized using classical gradient descent on finite machines.
Training and inference pipelines are composed of matrix multiplications, activation functions, and classical control flow—all fully describable by conventional programming languages and Turing Machines. AI, no matter how sophisticated, does not violate or transcend this model. In fact, LLMs like ChatGPT are fully emulatable by Turing machines given sufficient memory and time.
(*) Not playing the curmudgeon here, mind you, only trying to keep the perspective, as hype around "AI" often blurs the distinction between paradigm and application.
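To make that point concrete, here is a toy illustration (shapes, hyperparameters, and the task are arbitrary): a one-hidden-layer network trained end-to-end using nothing but matrix multiplies, elementwise activations, and ordinary control flow, exactly as described.

```python
import numpy as np

# Toy one-hidden-layer network: every step below is a matrix multiply,
# an elementwise activation, or plain control flow -- all classically
# computable, nothing beyond the conventional model.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float).reshape(-1, 1)  # XOR-like labels

W1 = rng.normal(scale=0.5, size=(2, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))

losses = []
for _ in range(2000):
    h = np.tanh(X @ W1)                  # hidden layer: matmul + activation
    p = 1.0 / (1.0 + np.exp(-(h @ W2)))  # output: matmul + sigmoid
    # Binary cross-entropy, tracked so we can watch optimization progress.
    losses.append(float(-(y * np.log(p + 1e-9)
                          + (1 - y) * np.log(1 - p + 1e-9)).mean()))
    # Gradients via the chain rule, written out by hand.
    g_out = (p - y) / len(X)
    g_h = g_out @ W2.T * (1 - h ** 2)    # backprop through tanh
    W2 -= 0.5 * (h.T @ g_out)            # classical gradient descent updates
    W1 -= 0.5 * (X.T @ g_h)
```

Training and inference here are fully describable by (and emulatable on) a Turing machine with enough memory and time; scaling this loop up changes the sizes of the matrices, not the paradigm.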
Well, I mean, sure, but no-one's claiming that YouTube's going to fundamentally change the world.
Same for many of these “AI companies” that are burning through cash in a race to the bottom towards a commodity with no real prospects for a sustainable business model. The tech is cool, and can be useful, but the business aspects of all this are a forest full of dry timber waiting for one strike of lightning to burn the whole thing to the ground.
I've seen crash after crash, all softened with taxpayer bailouts, and economic recovery within a couple of years. Often, to "booming" economies, which just means "compared to during the crash".
Should your crash come to pass, it will be another part of the news cycle, 4% to 8% of people will be out of work for a few months to a year, and nothing will happen to the companies responsible.
In fact, they'll get bonuses for pre-crash performance.
This is how history has played out for decades.
The whole thing is bad and disingenuous (somehow the very real impact of excessive crawling by AI companies is treated as an indictment of the value of the output).
And it just gets worse. For instance:
> If you’re looking something up [on a search engine], you usually type a few keywords and get a list of links. But with a chatbot, you have to write full sentences, and how fast you can type limits how fast you can interact. Then, instead of getting quick, scannable links, you get a big block of text. You read it—but you’re always aware it might be wrong. On a regular search engine, you can judge a source just by looking at the domain of the website, the design of the page or even reading the “About” page.
Got that? Chatbots make you type whole sentences, and instead of a short list of links the reader can easily scan, click through, judge by the quality of the graphic design, and vet via the obviously-totally-trustworthy About pages… you get answers in one place that could be wrong.
The fact that all chatbots include citations that you can click to do the same rigorous design-based fact checking is omitted, presumably because it would weaken the argument.
There are legit reasons to dislike the ethics of AI companies, there are legit reasons to believe this is a dot-bomb-style bubble, and there are legit reasons to be skeptical that the tech has enough headroom to reach AGI.
But this article just puts little bits of each in a blender and hopes for the best. It’s funny because while decrying “hype”, it uses all the same cheap and lazy rhetorical techniques as the worst AI hypesters. Further illustrating the “you become what you hate” principle, I suppose.
I've noticed that I've solved quite a few problems simply by being forced to spell out the precise problem statement to an AI bot. I knew the answer as soon as I had finished typing the question out in full, and watching the AI confirm my suspicions was superfluous but gratifying.
I also now feel guilty for not providing the same level of detail to other people that I've tasked with something.
Relatable! It's because long messages are for uncool nerds. Cool people write short, ambiguous messages. But in the presence of AI, we let go of our shame and write to our hearts' content!
Meanwhile, the title photo is unintentionally ultra-relevant.
Thank you for the idea.
Considering how many free offerings there are, this might actually just work.
1. They train on website data without permission
2. They require a lot of electricity
3. Kids use them to cheat on homework
And honestly this is where I stopped reading. I’m the biggest AI hater but none of these are good arguments against AI (I would argue that #3 is actually an argument for AI). If this is what you’re leading with then I’m not particularly interested in reading the rest.
> They’re just tools. If they disappeared tomorrow, it wouldn’t affect how I work or live.
This sounds like prepper mentality? Or is it more objectively sound, like quitting Facebook?
> In the end, I see AI for what it is: a powerful but limited tool—not a revolution, not a replacement for human thinking, and definitely not something worth worshipping.
How much of a "revolution" it is depends on your field though. I think computer programming is still the field that is most impacted by potential productivity improvements from these tools.
This seems to be a common disconnect. If you're using the free version of ChatGPT, you don't get to see what everybody else is seeing, not even close.
> None of the past “big things” were pushed like this. They didn’t get flooded with billions in investment before proving themselves
Oh, sweet summer child ^^ I assume Mert was not around to witness the internet boom and bust. Exactly this happened.
There is a lot of conflation in this article. It cites a lot of ethical concerns around the sourcing and training of data, expected job losses and the issues around that, but those are not reasons to doubt the _efficacy_ of AI. There are surprisingly few and weak arguments as to why the hype is not justified, presumably because the author hasn't used powerful models (see above).
It's possible to believe the hype is real and still to find AI unethical. But this article just mixes it all into a big pot of "AI bad" without addressing the cognitive dissonance required to believe both "AI is not very useful" and "AI will eliminate problematic numbers of jobs".
Certainly there are people who overhype AI/LLMs. Also any discussion of AGI is a mere speculation at this point, but you can't deny that LLMs are revolutionary tools, and we are still learning their limits. I find it bizarre when people deny that.
The "jobs killed by AI" worry is, for me, mostly zero-sum thinking in action. Also, all AI companies get treated as if they were one AI company instead of a (currently) healthy competitive field, and apparently all users are duped even though they're suddenly up to 100× more productive.
And yes, the energy needed is horrible (is AI now above Bitcoin, or still below?).
The anecdotes about hallucinations are true, but what about the success stories? E.g. faster vacation research, or the fact that everybody with ChatGPT now has access to a world-class team of doctors (where previously there was often none).
Yes, criticism of AI and AI companies is good and necessary. That said: even if all technological progress stopped right now, this is the most meaningful technological change I've experienced in the 25 years I've been working in IT/web.
Also, I learned about Bitcoin when it was worth 8 USD and got obsessed with the tech, but I always thought it was overhyped. I never owned one satoshi. I still think crypto ended up being hype and not adding real value to the world. But I could be very, very wealthy if I had jumped on that hype train XD.
I think that, with all the hype, AI does provide some real value to the world. That's the train I'm jumping on.
There is more to it, but it requires more effort to learn and see for yourself instead of repeating the laundry list of “why is it bad” points posted by all the other AI doomers.
For people who want to believe the author, this flooding the zone approach is catnip. They get the confirmation they need. The people who don't want to believe the author will just dismiss the entire article after the first argument they recognize as invalid.
And for the people in the middle, it's impossible to have a good discussion about the article and get a better understanding because there are too many unrelated arguments packed together. Trying to rebut or support just one of them just feels like a waste of time.
If you're trying to write an anti-AI article that lands, it really would be more effective to pick the 1-2 strongest points (whatever they are).
The only personal reason comes at the end where he says he would never log in to use an AI product. The entire rant could have been a tweet.
I think the DotCom bubble would fit the above description (smaller numbers of $, but similar hype).
I think we entered this reality a while ago from the rampant use of social media, dating apps, and short form content.
baq•5h ago
Meanwhile, model providers are serving millions if not billions of tokens daily.
Don't want to say this is a Dropbox-comment-class blog post, but it certainly... ignores something.
monsieurbanana•5h ago
That doesn't change their usefulness: if tomorrow they all increased the price 10×, they would remain useful for many use cases. Not to mention that in a year or two the costs might go down an order of magnitude for the same accuracy.
_heimdall•5h ago
I expect a query language to be deterministic, and I expect the other end of the query to only return data that actually exists. LLMs are neither of those, so to me they are impressive natural-language engines, but they aren't really a tool for querying human knowledge.