Something big is happening (97 points, 77 comments)
LLMs can’t be enhanced by dynamic training, because that kind of continual learning is already what humans do. It’s by design that their “guidelines” are fixed.
\ | /
--(_) --
.' . '.
/ . . \
| . |
\ . . /
'. . .'
'v'> If it were a life-or-death decision, would you trust the model? Judgement, yes, but a decision? No, they are not capable of making decisions, at least not important ones.
A self-driving car with a vision-language-action model inside buzzes by.
> It still fails when it comes to spatial relations within text, because everything is understood in terms of relations and correspondences between tokens as values themselves, and apparent spatial position is not a stored value.
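(A minimal sketch of what that claim means, using the tiktoken package and the cl100k_base encoding purely for illustration: a 2-D text layout is flattened into a single left-to-right token sequence before a model ever sees it, so "above" and "below" are never stored as values.)

    # Minimal sketch (assumes `pip install tiktoken`): a 2-D layout becomes a
    # 1-D token sequence, so vertical adjacency is never a stored value --
    # the model only sees left-to-right order and newline tokens.
    import tiktoken

    grid = "A . B\n. X .\nC . D\n"

    enc = tiktoken.get_encoding("cl100k_base")
    token_ids = enc.encode(grid)

    # Print each token next to its sequence index; note there is no row or
    # column coordinate anywhere, only position in the flat sequence.
    for i, t in enumerate(token_ids):
        print(i, repr(enc.decode([t])))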
A large multimodal model listens to your request and produces a picture.
> They'll always need someone to take a look under the hood, figure out how their machine ticks. A strong, fearless individual, the spanner in the works, the eddy in the stream!
GPT‑5.3‑Codex helps debug its own training.
Vision-action maybe. Jamming language in the middle there is an indicator you should run for public office.
Doesn't this support the author's point? It still required humans.
And the details involved in closing some of the rest of that loop do not seem THAT complicated.
No one (at least no serious person) is saying ChatGPT is Immanuel Kant or Ernest Hemingway. The fact that we still have sherpas doesn't make trains any less useful or interesting.
Fundamentally, I think that many problems in white-collar life are text comprehension problems or basic automation problems. Further, they often don't even need to be done particularly well. For example, we've long decided that it's OK for customer support to suck, and LLMs are now an upgrade over an overseas call center worker who must follow a rigid script.
So yeah, LLMs can be quite useful and will be used more and more. But this is also not the discourse we're having on HN. Every day, there's some AGI marketing headline, including one at #1 right now from OpenAI.
That’s very big.
AlphaZero was a special and unusual case; I would say an outlier.
FSD is still not ready. People have seen it working for ten years, slowly climbing the asymptote, but it still hasn't reached human-level driving, and it may take a while.
I use AI models for coding every day; I am not a Luddite. But I don't feel the AGI, not at all. What I am seeing is a nice tool that is seriously over-hyped.
This article has a confrontational title, but the point made here doesn't seem incompatible with the original [0]... the author is confronting the FUD [1] directly, which is understandable but perhaps not quite as useful as refuting the core thesis, which is that something you cannot afford to ignore is happening.
In fact, both these people seem to be in agreement that you need to keep an eye on this ball; they just have a "panic" versus "don't panic" framing. Should you panic in an emergency? Research says no [2].
[0] https://shumer.dev/something-big-is-happening
[1] https://en.wikipedia.org/wiki/Fear,_uncertainty,_and_doubt - note the original author is an AI founder
Something Big Is Coming (Annotated by Ed Zitron) [pdf] - https://news.ycombinator.com/item?id=47007991 - Feb 2026 (31 comments)
Something Big Is Happening - https://news.ycombinator.com/item?id=46973011 - Feb 2026 (74 comments)
HN gets tons of thought-piece submissions about AI, so we try to keep the bar relatively high (notice that word 'relatively'; I'm not saying it's as high as all that!). If discussion here is somewhat uncorrelated with discussion on the rest of the internet, that's good, at least for this kind of content.
mchusma•1h ago
But I personally have repeatedly used AI instead of humans across domains.
AI displacement isn’t a prediction. It’s here.
DangitBobby•1h ago