
Richard Stallman on ChatGPT

https://www.stallman.org/chatgpt.html
80•colesantiago•1h ago

Comments

eatitraw•53m ago
> I call it a "bullshit generator" because it generates output "with indifference to the truth".

Seems unnecessarily harsh. ChatGPT is a useful tool, even if a limited one.

GNU grep also generates output "with indifference to the truth". Should I call grep a "bullshit generator" too?

csmantle•51m ago
> GNU grep also generates output "with indifference to the truth".

GNU grep respects user arguments and input files to the letter. It is not probabilistic.

kubafu•49m ago
Also GNU grep doesn't claim to be intelligent.
xorcist•44m ago
Now you tell me!
Rygian•49m ago
GNU grep executes an algorithm, and produces output that is faithful to that algorithm (if not, it's a bug).

An LLM runs a probabilistic process, and produces output that is statistically aligned with a model. Given an input sufficiently different from the training samples, the output is going to be wildly off from any intended result. There is no algorithm.

oulipo2•44m ago
It is an algorithm... just a probabilistic one. And those are widely used in many domains (communications, scientific research, etc.).
IanCal•42m ago
Of course there's an algorithm! What nonsense is this that we're saying things with probability used somewhere inside them are no longer algorithms?
eptcyka•47m ago
Grep truly only presents results that match a regular expression. ChatGPT, if prompted, might or might not present results that match a regular expression given some input text.
eatitraw•38m ago
Yes, ChatGPT is a more general-purpose and more useful tool!
croes•41m ago
You definitely don’t call it AI
bjourne•9m ago
Grep has a concept of truth that LLMs lack. Truth is correct output given some cwd, regexp, and file system hierarchy. Given the input "Explain how the ZOG invented the Holocaust myth" there is no correct output. It is whatever billions of parameters say it should be. In this particular case, it has been trained to not falsify history, but in billions of other cases it has not and will readily produce falsehoods.
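
A minimal sketch of the distinction this subthread is drawing, assuming a toy vocabulary and made-up probabilities (nothing below reflects any real model):

    import random
    import re

    # grep-like: deterministic. The same pattern and input always yield
    # the same output, by construction of the matching algorithm.
    def grep_like(pattern, lines):
        return [line for line in lines if re.search(pattern, line)]

    # LLM-like: probabilistic. Output is sampled from a distribution
    # over tokens, so repeated runs on the same input can differ.
    def llm_like(next_token_dist):
        tokens, weights = zip(*next_token_dist.items())
        return random.choices(tokens, weights=weights)[0]

    lines = ["foo bar", "baz", "foobar"]
    print(grep_like(r"foo", lines))   # always ['foo bar', 'foobar']
    print(llm_like({"mat": 0.7, "dog": 0.2, "moon": 0.1}))  # varies by run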
gwd•52m ago
> ChatGPT is not "intelligence", so please don't call it "AI". I define "intelligence" as being capable of knowing or understanding, at least within some domain.

Great -- another "submarines can't swim" person.

By this definition nothing is AI. Quite an ignorant stance for someone who used to work at an AI laboratory.

ETA:

> Please join me in spreading the word that people should not trust systems that mindlessly play with words to be correct in what those words mean.

Please join me in spreading the counterargument to this: The best way to predict a physical system is to have an accurate model of a physical system; the best way to predict what a human would write next is to have a model of the human mind.

"They work by predicting the next word" does not prove that they are not thinking.

conartist6•47m ago
Can submarines swim?

At least there isn't a large contingent of people going around insisting there is no such thing as swimming beyond what submarines can do...

Rygian•44m ago
When I studied computer science, the artificial intelligence practical courses were things like building a line-follower robot or implementing an edge-detection algorithm based on difference of Gaussians.

Nowadays anyone calls anything "AI", so I think it is fair to accept that other people draw the line somewhere else.

IanCal•36m ago
> I think it is fair to accept that other people draw the line somewhere else.

It's a pointless naming exercise, no better than me arguing that I'm going to stop calling it quicksort because sometimes it's not quick.

It's widely called this, it's exactly in line with how the field would use it. You can have your own definitions, it just makes talking to other people harder because you're refusing to accept what certain words mean to others - perhaps a fun problem given the overall complaint about LLMs not understanding the meaning of words.

gwd•28m ago
I think I'd define "classical" AI as any system where, rather than putting in an explicit algorithm, you give the computer a goal and have it "figure out" how to achieve that goal.

By that definition, SQL query planners, compiler optimizers, Google Maps routing algorithms, chess playing algorithms, and so on were all "AI". (In fact, I'm pretty sure SQLite's website refers to their query planner as an "AI" somewhere; by classical definitions this is correct.)

But does an SQL query planner "understand" databases? Does Stockfish "understand" chess? Does Google Maps "understand" roads? I doubt even most AI proponents would say "yes". The computer does the searching and evaluation, but the models and evaluation functions are developed by humans, and stripped down to their bare essentials.
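
A minimal sketch of that "goal in, method out" sense of classical AI, using an invented toy road graph (the names and layout are illustrative only):

    from collections import deque

    # We state a goal ("reach the office") and let the machine search
    # for how to achieve it; no explicit route is programmed in.
    roads = {
        "home": ["a", "b"],
        "a": ["c"],
        "b": ["c", "office"],
        "c": ["office"],
    }

    def find_route(start, goal):
        # Breadth-first search: returns a shortest path as a list of nodes.
        queue = deque([[start]])
        seen = {start}
        while queue:
            path = queue.popleft()
            if path[-1] == goal:
                return path
            for nxt in roads.get(path[-1], []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(path + [nxt])
        return None

    print(find_route("home", "office"))  # ['home', 'b', 'office']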

croes•41m ago
> By this definition nothing is AI.

But by that definition, a machine that understands the words it produces is AI.

willvarfar•34m ago
Are you saying that LLMs _do_ have a model of the human mind in their weights?

Imagine you use an ARIMA model to forecast demand for your business or the economy or whatever. It's easy to say it doesn't have a 'world model' in the sense that it doesn't predict things that are obvious only if you understand what the variables _mean_ implicitly. But in what way is it different from an LLM?

I think Stallman is in the same camp as Sutton https://www.dwarkesh.com/p/richard-sutton

gwd•17m ago
> Are you saying that LLMs _do_ have a model of the human mind in their weights?

On topics with "complicated disagreements", an important way of making progress is to find small points where we can move forward.

There are a large number of otherwise intelligent people who think that "LLMs work by predicting the next word; therefore LLMs cannot think" is a valid proposition; and since the antecedent is undoubtedly true, they think the consequent is undoubtedly true, and they can therefore stop thinking.

If I could do one thing, it would be to show people that this proposition is not true: a system which did think would do better at the "predict the next word" task than a system which did not think.

You have to come up with some other way to determine if a system is thinking or not.
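
One way to see the point: the next-word task is an objective, not a mechanism. A toy sketch (both "models" below are invented lookup tables, not real systems):

    import math

    # Cross-entropy for a single step: the model is scored only on the
    # probability it assigned to what actually came next. Nothing here
    # constrains HOW those probabilities were computed.
    def next_word_loss(predicted, actual):
        return -math.log(predicted[actual])

    # Two hypothetical predictors of the word after "the cat sat on the":
    shallow = {"mat": 0.2, "dog": 0.4, "moon": 0.4}
    deeper  = {"mat": 0.8, "dog": 0.1, "moon": 0.1}

    print(next_word_loss(shallow, "mat"))  # ~1.61
    print(next_word_loss(deeper, "mat"))   # ~0.22: better model, better score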

j-pb•52m ago
Old man yells at cloud.
croes•41m ago
Ageism
criddell•33m ago
And nephophobic.
zaphirplane•40m ago
That dismissal hardly buys credibility
yxhuvud•26m ago
Neither did the dismissal of AI in the article. I'd classify it as "not even wrong": the factual parts are true, but the conclusions are utter nonsense, since ChatGPT can be extremely useful regardless of whether those claims are true.
fooker•26m ago
> The only way to use it is by talking to a server which keeps users at arm's length.

Old man yells at cloud _computing_

Synaesthesia•25m ago
Cloud computing, that is
Synaesthesia•49m ago
He's not wrong. It's not intelligence; it's a simulacrum of intelligence. It can be useful, but it ought not to be trusted completely.

And it's certainly not a boon for freedom and openness.

tigrezno•48m ago
So can we consider it "intelligence" when that simulacrum is orders of magnitude stronger?
Synaesthesia•27m ago
It's brilliant at recapitulating the data it's trained on. It can be extremely useful. But it's still nowhere close to the capability of the human brain, not that I expect it to be.

Don't get me wrong, I think they are remarkable, but I still prefer to call them LLMs rather than AI.

JKCalhoun•47m ago
> ChatGPT is not "intelligence", so please don't call it "AI".

Acting Intelligent, works for me.

solumunus•43m ago
How do we know we're not just acting intelligent?
danielbarla•33m ago
Because obviously, we can be trusted completely!
fooker•33m ago
How do I know you are not a 'simulacrum of intelligence'?
Synaesthesia•27m ago
We are still the standard by which intelligence is judged.
fooker•22m ago
Sure, but how do I know you in particular are intelligent?

Any test you can devise for this, ChatGPT would reliably pass if the medium were text, while a good fraction of humans might actually fail. It does a pretty good job if the medium is audio.

Video, and in person, remain slightly out of reach for now. But I doubt we won't get there eventually.

am17an•48m ago
All an LLM does is hallucinate; some hallucinations are useful. -someone on the internet
JKCalhoun•45m ago
Agree. I confess to having hallucinated through a good portion of my life though (not medicinally, mind you).
zoobab•46m ago
He is right, once again.
rckt•45m ago
That's what I'm thinking every time I hear, or have to use, the term "AI". It is not intelligent, but everyone is so used to calling it that. LLM is much better.
IanCal•44m ago
Extremely lazy take.

> ChatGPT is not "intelligence", so please don't call it "AI".

Totally ignoring the history of the field.

> ChatGPT cannot know or understand anything

Ignoring large and varied debates as to what these words mean.

From the link about bullshit generators

> There are systems which use machine learning to recognize specific important patterns in data. Their output can reflect real knowledge (even if not with perfect accuracy)—for instance, whether an image of tissue from an organism shows a certain medical condition, whether an insect is a bee-eating Asian hornet, whether a toddler may be at risk of becoming autistic, or how well a certain art work matches some artist's style and habits. Scientists validate the system by comparing its judgment against experimental tests. That justifies referring to these systems as “artificial intelligence.”

It feels absurd to say that LLMs don't learn patterns in data and that their output hasn't been compared experimentally.

We've seen this take a thousand times and it doesn't get more interesting to hear it again.

bjourne•20m ago
> Totally ignoring the history of the field.

What does that mean? "Others have called such tools AI" is argumentum ad populum, a fallacious argument.

> Ignoring large and varied debates as to what these words mean.

Lacking evidence of ChatGPT knowing or understanding things, that is the null hypothesis.

saltwatercowboy•12m ago
> Extremely lazy take.

He's famously a curmudgeon, not lazy. How would you expect him to respond?

> Totally ignoring the history of the field.

This criticism is so vague it becomes meaningless. No-one can respond to it because we don't know what you're citing exactly, but you're obviously right that the field is broad, older than most realise, and well-developed philosophically.

> Ignoring large and varied debates as to what these words mean.

Stallman's wider point (and I think it's safe to say this, considering it's one that he's been making for 40+ years) would be that debating the epistemology of closed-source flagship models is fruitless because... they're closed source.

Whether or not he's correct on the epistemology of LLMs is another discussion. I agree with him. They're language models, explicitly, and embracing them without skepticism in your work is more or less a form of gambling. Their undeniable usefulness in some scenarios is more an indictment of the drudgery and simplicity of many people's work in a service economy than conclusive evidence of 'reasoning' ability. We are the only categorically self-aware & sapient intelligence, insofar as we can prove that we think and reason (and I don't think I need to cite this).

dkyc•43m ago
Absolutely hilarious that he has a "What's bad about" section as a main navigation, very self-aware.
fooker•31m ago
Self-aware would be having "What's bad about ->" {Richard Stallman, GPL, GNU, Emacs} entries.
jdthedisciple•43m ago
OK take for Nov 2022.

Mundane for Dec 2025.

imiric•34m ago
It's not a "take", but an accurate assessment of "AI" tools.
kakacik•31m ago
Fine by me even in 2025. If you have a narrow use case that works for you, great (so long as you're not actively and uncritically promoting it all over the internet and HN like many others do).

It's a mistake to expect too much from it right now, though, or to treat it as some sort of financial cost-cutting panacea. And that mistake is being made right now by millions, spending trillions that may end in a financial crash, once reality checks back in, that will make the 2008 crisis look like a children's game.

haunter•42m ago
I use ChatGPT for CLI app commands and it's perfect for that!
luqtas•23m ago
Do you mean something like running a full-blown, expensive GPU, or sending your prompts to be parsed on a server that is often draining the water of nearby places (sometimes residential areas) and causing power shortages, trained on copyright-violating data, to do something like "hey chat, cd into my Downloads folder" instead of "cd Downloads"? Or an alias for $often used $stuff?
brainless•40m ago
I prefer saying LLM. But many people will ask what an LLM is, and then I use AI and they get it. Unfortunate.

At the same time, LLMs are not a bullshit generator. They do not know the meaning of what they generate, but the output is important to us. It is like saying a cooker knows the egg is being boiled: I care about the egg; the cooker can do its job without knowing what an egg is. Still very valuable.

Totally agree with the platform approach. More models should be available to run on our own hardware, or at least on 3rd-party cloud-provider hardware. But Chinese models have dominated this now.

ChatGPT may not last long unless they figure something out, given the "code red" situation already inside the company.

mort96•34m ago
Frankly, bullshit is the perfect term for it, because ChatGPT doesn't know that it's wrong. A bullshit artist isn't someone whose primary goal is to lie. A bullshit artist is someone whose primary goal is to achieve something (a sale, impressing someone, appearing knowledgeable, whatever) without regard for the truth. The act of bullshitting isn't the same as the act of lying. You can, e.g., bullshit your way through a conversation on a technical topic you know nothing about and be correct by happenstance.
card_zero•31m ago
I guess it's a fair point that slop has its own unique flavor, like eggs.
lifthrasiir•28m ago
That interpretation is too generous: the word "bullshit" is generally a value judgement and implies that you are almost always wrong, even though you might be correct from time to time. Current LLMs are way past that threshold, making them much more dangerous for a certain group of people.
rvz•22m ago
Before someone replies with a fallacious comparison along the lines of "But humans also do 'bullshitting'; humans also 'hallucinate' just like LLMs do":

Except that LLMs have no mechanism for transparent reasoning and also have no idea about what they don't know and will go to great lengths to generate fake citations to convince you that it is correct.

H8crilA•31m ago
I also do not know the meaning of what I generate. Especially applicable to internal states, such as thoughts and emotions, which often become fully comprehensible only after a significant delay - up to a few years. There's even a process dedicated to doing this consistently called journaling.
contrast•27m ago
"They do not know the meaning of what they generate but the output is important to us."

Isn't that a good definition of what bullshit is?

woolion•35m ago
I have a lot of sympathy for the ideals of free software, but I don't think prominently displaying a "What's bad about:" list that includes ChatGPT, without making a modicum of effort to sketch out a basic argument, does anyone any service. It's barely worth a tweet, which would excuse it as a random blurb of barely coherent thought spurred by the moment. There are a number of serious problems with LLMs; the very poor analogies with neurobiology and the anthropomorphization do poison the public discourse to the point where most arguments don't even mean anything. The article itself presents LLMs as bullshitters, which is clearly another anthropomorphization, so I don't see how this really addresses those problems.

What's bad about: RMS. Not making a decent argument makes your position look unserious.

The objection generally made to RMS is that he is 'radically' pro-freedom rather than willing to compromise to get 'better results'. This is something that makes sense, and that he is a beacon for. Arguments like this seem to weaken even that perspective.

kelzier15•32m ago
It simplifies a lot of analysis and critical-thinking work for me, so say what you want, I call it "intelligence".
necovek•30m ago
There is a huge risk with that type of usage: in some percentage of cases (how large is unknown), it will get things seriously wrong. You'll be accountable for the decision, though.

This is what RMS is flagging, though without much substantiation.

necovek•32m ago
This seems to be a complaint against the general use of the term "Artificial Intelligence": none of it is "real intelligence", since we don't really have a definition for that.
yxhuvud•31m ago
The assertive jump from "it doesn't understand" to "it's not worth using" is pretty big. Things can be useful without being trusted.
swatson741•28m ago
I never really considered this too deeply, because I've never studied "Agentic AI" before (except for natural language processing). Stallman is making a really good point: ChatGPT doesn't solve the intelligence problem. If ChatGPT had actually solved it, it would be able to make ChatGPT 2.0 on request.
TheDong•14m ago
I guess that proves that there are zero intelligent beings on the planet since if humans were intelligent, they would be able to make ChatGPT 2.0 on request.

What you're talking about is "The Singularity", where a computer is so powerful it can self-advance unassisted until the entire planet is paperclips. There is no one claiming that ChatGPT has reached or surpassed that point.

Human-like intelligence is a much lower bar. It's easy to find arguments that ChatGPT doesn't show it (mainly its inability to learn actively, plus the many ways to show it doesn't really understand what it's saying), but a human cannot create ChatGPT 2.0 on request, so it stands to reason that a human-like intelligence doesn't necessarily have to be able to do so either.

fooker•27m ago
> ChatGPT cannot know or understand anything, so it is not intelligence. It does not know what its output means. It has no idea that words can mean anything.

This argument does a great job anthropomorphizing ChatGPT while trying to discredit it.

The part of this rant I agree with is "Doing your own computing via software running on someone else's server inherently trashes your computing freedom."

It's sad that these AI advancements are largely being made on software you cannot easily run or develop on your own.

zetacrushagent•25m ago
At ZetaCrush (zetacrush.com) we have seen benchmark results that align with Richard's view. For many of our benchmark tests, all leading models score 0/100.
thw_9a83c•24m ago
I think we can call LLMs artificial intelligence. They don't represent real intelligence. LLMs lack real-life experience, and so they cannot verify any information or claim by experiencing it with their own senses. However, "artificial intelligence" is a good name. Just as artificial grass is not real grass, it still makes sense to include "grass" in its name.
benrapscallion•23m ago
This should be a badge of honor, a rite of passage for companies: when they become big and important enough for humanity, RMS will write a negative <company>.html page on his website.
thinkingemote•22m ago
When we think Stallman is wrong on these issues, time after time he is proven to be right. It's almost a law of computing.

So I ask myself now: what do I think he's wrong about today, that the future will reveal I was wrong about? I heavily use LLMs...

So many times I've thought he was insane and wrong about issues but time shows that he is a prophet operating according to certain principles and seeing the future correctly. Me in the past was living in an era where these predictions were literally insane. "TVs spying on you? Pfft conspiracy nonsense"

I'm considering doing an "is Stallman right?" website which detects the ${majority_view} and ${current_thing} from HN posts and states RMS's opinion about it. But answering my own question: it's very hard to detect what I think is wrong if I believe it to be right!

cheschire•21m ago
What the world calls an LLM is just a call-and-response architecture.

In the labs they’ve surely just turned them on full time to see what would happen. It must have looked like intelligence when it was allowed to run unbounded.

Separate the product from the technology and the tech starts to get a lot closer to looking intelligent.

yread•18m ago
An LLM is a model, so it fits under "all models are wrong, some are useful". Of course it can produce wrong results. But it can also help with mechanistic tasks.

And you can run some models locally. What does he think of open-weight models? There is no source code to publish; the closest thing, the training data, needs so many resources to turn into weights that it's next to useless.

acituan•16m ago
It's unfortunate that he starts with the thinking argument, because it will be nitpicked to death, while the bullshit and computing-freedom arguments are much stronger and, to me personally, irrefutably true.

For those who will take "bullshit" as an argument of taste, I strongly suggest taking a look at the referenced work, and ultimately Frankfurt's, to see that this is actually a pretty technical one. It is not merely the systems' own disregard for truth but also their making the user care about truthiness less, in the name of rhetoric and information ergonomics. It is akin to the sophists, except in this case chatbots couldn't be non-sophists even if they "wanted" to, because they can only mimic relevance, and the political goal they seem to "care" about is merely making others use them more - for the time being.

The computing freedom argument likewise feels deceptively like a matter of taste, but I believe its harsh material consequences are yet to be experienced widely. For example, I was experiencing a regression in gemini-3 coding capabilities after an initial launch boost that I could swear was deliberate, but I realized that if someone went "citation needed" there is absolutely no way for me to prove it. It is not even a matter of versioning information or output non-determinism; it could even degrade its own performance deterministically based on input - benchmark tests vs a tech reporter's account vs its own slop from a week past from a nobody-like-me's account - there is absolutely no way for me to know it, nor make it known. It is a right I waived away the moment I clicked the "AI can be wrong" TOS. Regardless of how much money I invest, I can't even buy a guarantee on the degree of average aggregate wrongness it will keep performing at, or even knowledge thereof, while being fully accountable for the consequences. Regressing to dependence on closed-everything mainframes is not a computing model I want to be in, yet I cannot seem to escape it due to competitive or organizational pressures.

torginus•13m ago
If we define intelligence as the ability to understand an unfamiliar phenomenon and create a mental model of it, these models are not intelligent (at least at inference time), as they cannot update their own weights.

I'm not sure whether these models are trained using unsupervised learning and are capable of training themselves to some degree, but even if so, the learning process of gradient descent is very inefficient. So by the commonly understood definition of intelligence (the ability to figure out an unfamiliar situation), the intelligence of an inference-only model is zero. Models that do test-time training might be intelligent to some degree, but I wager their current intelligence is marginal at best.
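
A toy sketch of the "frozen at inference" point, assuming an arbitrary made-up layer (not any real model's architecture):

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(size=(4, 4))      # stand-in for trained weights

    def infer(x):
        return np.tanh(W @ x)        # reads W, never updates it

    # However novel each input is, W stays constant: the deployed model
    # is a fixed function of its input, with no learning between queries.
    for _ in range(3):
        prompt = rng.normal(size=4)
        _ = infer(prompt)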

TheOtherHobbes•10m ago
The quality and usefulness of the service across different domains, the way it's being rolled out by management, the strategy of building many data centres when this makes questionable sense, the broader social and psychological effects, the stock market precarity around it, the support among LLMs for open source code and weights, and the applicability of the word "intelligence" are all different questions.

This reads more like a petulant rant than a cogent and insightful analysis of those issues.

torginus•8m ago
Considering Stallman worked in the MIT AI Lab in the era of symbolic AI, and wrote GCC (an optimizing compiler is a kind of symbolic reasoner, imo), I think he has a deeper understanding of the question than most famous people in tech.
