
Root System Drawings

https://images.wur.nl/digital/collection/coll13/search
191•bookofjoe•7h ago•31 comments

Tinnitus Neuromodulator

https://mynoise.net/NoiseMachines/neuromodulationTonesGenerator.php
166•gjvc•4h ago•109 comments

Chen-Ning Yang, Nobel laureate, dies at 103

https://www.chinadaily.com.cn/a/202510/18/WS68f3170ea310f735438b5bf2.html
32•nhatcher•15h ago•11 comments

Flowistry: An IDE plugin for Rust that focuses on relevant code

https://github.com/willcrichton/flowistry
98•Bogdanp•6h ago•16 comments

Is Postgres read heavy or write heavy?

https://www.crunchydata.com/blog/is-postgres-read-heavy-or-write-heavy-and-why-should-you-care
26•soheilpro•1d ago•0 comments

Who invented deep residual learning?

https://people.idsia.ch/~juergen/who-invented-residual-neural-networks.html
52•timlod•5d ago•14 comments

What Dynamic Typing Is For

https://unplannedobsolescence.com/blog/what-dynamic-typing-is-for/
44•hit8run•4d ago•27 comments

./watch

https://dotslashwatch.com/
269•shrx•11h ago•75 comments

Solution to CIA’s kryptos sculpture is found in Smithsonian vault

https://www.nytimes.com/2025/10/16/science/kryptos-cia-solution-sanborn-auction.html
55•elahieh•2d ago•14 comments

Using CUE to unify IoT sensor data

https://aran.dev/posts/cue/using-cue-to-unify-iot-sensor-data/
16•mvdan•7h ago•0 comments

How to sequence your DNA for <$2k

https://maxlangenkamp.substack.com/p/how-to-sequence-your-dna-for-2k
5•yichab0d•59m ago•0 comments

Secret diplomatic message deciphered after 350 years

https://www.nationalarchives.gov.uk/explore-the-collection/the-collection-blog/secret-diplomatic-...
42•robin_reala•2d ago•4 comments

Ripgrep 15.0

https://github.com/BurntSushi/ripgrep/releases/tag/15.0.0
270•robin_reala•7h ago•65 comments

Liva AI (YC S25) Is Hiring

https://www.ycombinator.com/companies/liva-ai/jobs/inrUYH9-founding-engineer
1•ashlleymo•3h ago

K8s with 1M nodes

https://bchess.github.io/k8s-1m/
44•denysvitali•1d ago•7 comments

Titan submersible’s $62 SanDisk memory card found undamaged at wreckage site

https://www.tomshardware.com/pc-components/microsd-cards/tragic-oceangate-titan-submersibles-usd6...
56•WithinReason•1d ago•19 comments

New Work by Gary Larson

https://www.thefarside.com/new-stuff
460•jkestner•23h ago•120 comments

Why the open social web matters now

https://werd.io/why-the-open-social-web-matters-now/
35•benwerd•4d ago•2 comments

Show HN: The Shape of YouTube

https://soy.leg.ovh/
14•hide_on_bush•6d ago•6 comments

Coral NPU: A full-stack platform for Edge AI

https://research.google/blog/coral-npu-a-full-stack-platform-for-edge-ai/
68•LER0ever•2d ago•7 comments

SQL Anti-Patterns

https://datamethods.substack.com/p/sql-anti-patterns-you-should-avoid
184•zekrom•8h ago•130 comments

Picturing Mathematics

https://mathenchant.wordpress.com/2025/10/18/picturing-mathematics/
23•jamespropp•5h ago•0 comments

Attention is a luxury good

https://seths.blog/2025/10/attention-is-a-luxury-good/
126•herbertl•5h ago•73 comments

Ruby Blocks

https://tech.stonecharioteer.com/posts/2025/ruby-blocks/
161•stonecharioteer•4d ago•90 comments

Lux: A luxurious package manager for Lua

https://github.com/lumen-oss/lux
46•Lyngbakr•8h ago•12 comments

Carbonized 1,300-Year-Old Bread Loaves Unearthed in Turkey

https://ancientist.com/1300-year-old-communion-bread-unearthed-in-karaman-a-loaf-for-the-farmer-c...
4•ilamont•5d ago•1 comments

Our Paint – a featureless but programmable painting program

https://www.WellObserve.com/OurPaint/index_en.html
31•ksymph•6d ago•5 comments

Fast calculation of the distance to cubic Bezier curves on the GPU

https://blog.pkh.me/p/46-fast-calculation-of-the-distance-to-cubic-bezier-curves-on-the-gpu.html
102•ux•11h ago•22 comments

When you opened a screen shot of a video in Paint, the video was playing in it

https://devblogs.microsoft.com/oldnewthing/20251014-00/?p=111681
79•birdculture•2d ago•8 comments

The Hunt for the World's Oldest Story

https://www.newyorker.com/magazine/2025/10/20/review-the-roots-of-ancient-mythology-books
9•pseudolus•5d ago•2 comments

Game over. AGI is not imminent, and LLMs are not the royal road to getting there

https://garymarcus.substack.com/p/the-last-few-months-have-been-devastating
113•FromTheArchives•7h ago

Comments

Lionga•7h ago
A decade to AGI is still insanely optimistic, borderline delusional. A century, or never, is a far more realistic timeline.
tobias3•7h ago
We cannot even guess what the timeline would be. This is why it is insane to invest in it and expect a positive return in usual investing horizons.
bogzz•7h ago
ORCL seems to be the one who'll be left without a chair when the music stops.
Cornbilly•7h ago
Delusion seems to be the main product of SV these days.
riskable•7h ago
I don't think it's possible to say how far away or "never" it is. All we know is that LLMs cannot become AGI.

My prediction: AGI will come from a strange place. An interesting algorithm everyone already knew about that gets applied in a new way. Probably discovered by accident because someone—who has no idea what they're doing—tried to force an LLM to do something stupid in their code and yet somehow, it worked.

What wouldn't surprise me: A novel application of the Archimedes principle or the Brazil nut effect. You might be thinking, "What TF do those have to do with AI‽ LOL!" and you're probably right... Or are you?

ACCount37•6h ago
Why not?

What's the fundamental, absolutely insurmountable capability gap between the two? What is it that can't be bridged with architectural tweaks, better scaffolding or better training?

I see a lot of people make this "LLMs ABSOLUTELY CANNOT hit AGI" assumption, and it never seems to be backed by anything at all.

fnord77•6h ago
The view that LLMs alone are insufficient for AGI is based on their fundamental mathematical architecture
ACCount37•1h ago
What is the exact limitation imposed by "their fundamental mathematical architecture"? I'm not aware of any such thing.
OJFord•6h ago
> All we know is that LLMs cannot become AGI.

A part of it, perhaps: I think of it like 'computer vision'; LLMs offer 'computer speech/language' as it were. But not a 'general intelligence' of motives and reasoning, 'just' an output. At the moment we have that output hooked up to a model that has data from the internet and books etc. in excess of what's required for convincing language, so it itself is what drives the content.

I think the future will be some other system for data and reasoning and 'general intelligence', that then uses a smaller language model for output in a human-understood form.

red75prime•6h ago
I guess AGI will come from identifying and grinding away limitations of transformers (and/or diffusion networks). As it has been with almost all technologies. Then someone (or someAI) will probably find something unexpected (but less unexpected at this stage) and more suitable for general intelligence.
tim333•1h ago
Not the current ones but that doesn't mean you can't build something that uses the ideas from LLMs but adds to them.
XorNot•6h ago
I agree, though I would argue the issue is closer to this: no one is trying to measure how close we should be.

If you take the materialist view - and in this business you obviously have to - then the question is "do we have human level computational capacity with some level of real-time learning ability?"

So the yardstick we need is (1): what actually is the computational capacity of the human brain? (i.e. the brain obviously doesn't simulate its own neurons, so what is a neuron doing and how does that map to compute operations?) and then (2): is any computer system running an AI model plausibly working within those margins or exceeding them?

With the second part being: can that system implement the sort of dynamic operations which humans obviously must, unaided? I.e. when I learn something new I might go through a lot of repetitions, but I don't just stop being able to talk about a subject for months while ingesting the sum content of the internet to rebuild my model.
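As a rough sense of scale for yardstick (1), here is a back-of-the-envelope sketch in Python. Every figure is a commonly quoted order-of-magnitude guess rather than a measurement, and the mapping from synaptic events to compute operations is exactly the open question above.

    # Napkin estimate of the brain's "compute", using rough textbook figures.
    # All numbers are order-of-magnitude assumptions, not measurements.
    neurons = 86e9               # ~86 billion neurons
    synapses_per_neuron = 1e4    # ~10,000 synapses each
    avg_firing_rate_hz = 1.0     # average spike rate (highly uncertain)
    ops_per_event = 1            # treat one synaptic event as one "op"

    events_per_sec = neurons * synapses_per_neuron * avg_firing_rate_hz * ops_per_event
    print(f"~{events_per_sec:.1e} synaptic events/sec")   # ~8.6e14

    # Bracket the uncertainty in firing rate and ops-per-event:
    print(f"plausible range: {events_per_sec * 0.1:.0e} to {events_per_sec * 1e3:.0e} ops/sec")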

naasking•6h ago
> do we have human level computational capacity with some level of real-time learning ability

We don't build airplanes by mimicking birds. An AGI's computational capacity won't be directly comparable to human computational capacity, which was formed by messy, heuristic evolution.

You're right that there is some core nugget in there that needs to be replicated though, and it will probably be by accident, as with most inventions.

Hardware is still progressing exponentially, and performance improvements from model and algorithmic improvements have been outpacing hardware progress for at least a decade. The idea that AGI will take a century or more is laughable. Now that's borderline deluded.

ACCount37•7h ago
I don't think Gary Marcus had anything of value to say about AI at any point in what, the past 2 decades? You can replace him with a sign that has "the current AI approach is DOOMED" written on it at no loss of function.

Symbolic AI has died a miserable death, and he never recovered from it.

an0malous•7h ago
That sign would be more valuable than the one Sam Altman’s been holding saying “AGI in 2025”
p1esk•6h ago
Did he say that?
omnicognate•6h ago
https://youtu.be/xXCBz_8hM9w?si=KjaolnjTJd2Lz82k

46:12

> What are you excited about in 2025? What's to come?

> AGI. Excited for that.

MattRix•6h ago
That was a joke, how do people not understand that?
omnicognate•6h ago
Because it wasn't. It was said firmly, with a straight face in response to a simple, serious question, after an interview in which he claimed OpenAI know exactly how to get to AGI and it's just a matter of execution.

Of all the possible defenses of this remark, I didn't expect that one.

MattRix•6h ago
If you know how sarcasm works, that’s exactly what it looks like. Afterwards he immediately pondered the question and then gave his real answer (about looking forward to having a kid).

Besides all that, think about it: if this wasn’t a joke, then why has he never said the same thing again?

an0malous•5h ago
He has, many times. And when he doesn’t say it directly he insinuates it.

> OpenAI CEO Sam Altman wrote in January: “We are now confident we know how to build AGI.” This is after he told a Y Combinator vodcast in late 2024 that AGI might be achieved in 2025 and tweeted in 2024 that OpenAI had “AGI achieved internally.”

https://fortune.com/2025/08/25/tech-agi-hype-vibe-shift-supe...

Oh it’s all just jokes though, jokes that Sam has profited enormously from.

burnerzzzzz•4h ago
Calm down; in some cultures (American), sarcasm must be clearly telegraphed. In others (British), that would ruin the joke.
tim333•1h ago
I took that as he's excited to be working towards AGI. The one-word answer is fairly ambiguous. For context, in the same interview he says:

>... from here to building an AGI will still take a huge amount of work there are some known unknowns but I think we basically know what to go what to go do and it'll take a while, it'll be hard but that's tremendously exciting... (38:59)

which to me doesn't sound like we'll be done within a year.

MattRix•6h ago
Sam Altman never said that, except as a joke. An interviewer asked “what are you excited for in 2025” and he said “AGI”, then said “having a kid” as his real answer.

It’s also worth noting that even when he does talk about AGI, he makes a strong distinction between AGI (human level) and ASI (super human level) intelligence. Many people in these kind of discussions seem to conflate those two as being the same.

an0malous•5h ago
He didn’t say it was a joke and has benefited to the tune of hundreds of billions of dollars from the prevailing belief that AGI is imminent, so it seems terribly convenient and charitable to interpret it as a joke. Should we have given the same charity to Elizabeth Holmes and Sam Bankman-Fried when they reported their technological capabilities and cash balance? “Oh it’s not fraud that they materially benefited from, it’s just a joke.”
dogma1138•6h ago
Symbolic AI didn't die though; it was just merged with deep learning, either as complementary from the get-go (e.g. AlphaGo, which uses symbolic AI to feed a deep neural network) or now as a post-processing / intervention technique for guiding and optimizing LLM outputs. Human-in-the-loop and MoR are very much symbolic AI techniques.
bbor•6h ago
Exactly this, well said. Symbolic AI works so well that we don’t really think of it as AI anymore!

I know I for one was shocked to take my first AI course in undergrad and discover that it was mostly graph search algorithms… To say the least, those are still helpful in systems built around LLMs.

Which, of course, is what makes Mr. Marcus so painfully wrong!

imtringued•6h ago
I'm not sure it's true that symbolic AI is dead, but I think Gary Marcus style symbolic AI is. His "next decade in AI" paper doesn't even mention symbolic regression. For those who don't know, linear regression tries to find a linear fit against a dataset. Quadratic regression a quadratic fit. And so on. Symbolic regression tries to find a symbolic expression that gives you an accurate data fit.

Symbolic regression has an extremely obvious advantage over neural networks, which is that it learns parameters and architecture simultaneously. Having the correct architecture means that the generalization power is greater and that the cost of evaluation due to redundant parameters is lower. The crippling downside is that the search space is so vast that it is only applicable to toy problems.
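To make that trade-off concrete, here is a toy brute-force sketch; the expression grammar, operators, and target function are made-up assumptions for illustration (not from the SymbolNet paper). It recovers x^2 + 1 once the search is deep enough, but the candidate count explodes with depth, which is the crippling part.

    import itertools

    # Toy symbolic regression: brute-force search over small expression trees.
    def grow(exprs):
        # One round of combining existing expressions with + and *.
        combined = []
        for (na, fa), (nb, fb) in itertools.product(exprs, repeat=2):
            combined.append((f"({na}+{nb})", lambda x, fa=fa, fb=fb: fa(x) + fb(x)))
            combined.append((f"({na}*{nb})", lambda x, fa=fa, fb=fb: fa(x) * fb(x)))
        return exprs + combined

    xs = [0.0, 1.0, 2.0, 3.0]
    ys = [x * x + 1 for x in xs]                  # target to recover: x^2 + 1

    exprs = [("x", lambda x: x), ("1", lambda x: 1.0)]
    for depth in range(1, 4):
        exprs = grow(exprs)
        err, best = min((sum((f(x) - y) ** 2 for x, y in zip(xs, ys)), name) for name, f in exprs)
        print(f"depth {depth}: {len(exprs):>6} candidates, best {best} (squared error {err:.1f})")
    # Candidate counts: 10 -> 210 -> 88410. The search space blows up fast.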

But Gary Marcus is in favour of hybrid architectures, so what would that look like? In the SymbolNet paper, they have essentially decided to keep the overall neural network architecture, but replaced the activation functions with functions that take multiple inputs aka symbols. The network can then be pruned down into a symbolic expression.

That in itself is actually a pretty damning blow to Gary Marcus, because now you have most of the benefits of symbolic AI with only a tiny vestige of investment into it.

What this tells us is that fixed activation functions with a single input appear to be a no go, but nobody ever said that biological neurons implement sigmoid, relu, etc in the first place. It's possible that the spike encoding already acts as a mini symbolic regressor and gives each neuron its own activation function equivalent.

The "Neuronal arithmetic" paper has shown that biological neurons can not only calculate sums (ANNs can do this), but also multiply their inputs, which is something the activation functions in artificial neural networks cannot do. LLMs use gating in their MLPs and attention to explicitly model multiplication.

There is also the fact that biological neurons form loops. The memory cells in LSTMs perform a similar function, but in the brain there can be memory cells everywhere whereas in a fixed architecture like LSTMs they are only where the designer put them.

It seems as if the problem with neural networks is that they're too static and inflexible and contain too much influence from human designers.

tim333•1h ago
You also need something on Gary Marcus's sign about how wonderful Gary Marcus is, like:

>Gary Marcus tried to tell you for years that this moment would come for years. Do consider reading his 2020 article...

I can't say I agree with the sentiment "Game over". The game of trying to develop AI is very much on.

AbrahamParangi•7h ago
It is worth noting that Gary Marcus has been declaring the newfound futility of AI every couple months for the last 2-3 years or so.

Meanwhile, the technology continues to progress. The level of psychological self-defense is unironically more interesting than what he has to say.

Quite a wide variety of people find AI deeply ego threatening to the point of being brainwormed into spouting absolute nonsense, but why?

sailingparrot•7h ago
> Quite a wide variety of people find AI deeply ego threatening to the point of being brainwormed into spouting absolute nonsense, but why?

He is not brainwashed; this just happens to be his business. What happens to Gary Marcus if Gary Marcus stops talking about how LLMs are worthless? He just disappears. No one ever interviews him for his general thoughts on ML, or to discuss his (nonexistent) research. His only claim to fame is being the loudest contrarian in the LLM world, so he has to keep doing that or accept becoming irrelevant.

Slight tangent, but this is a recurring pattern in fringe beliefs, e.g. prominent flat earthers who long ago accepted the earth is not flat but can't stop the act because all their friends and income are tied to that belief.

Not to say that believing LLMs won't lead to AGI is fringe, but it does show the danger (and benefits, I guess) of tying your entire identity to a specific belief.

brazukadev•7h ago
> Meanwhile, the technology continues to progress

And at the same time, his predictions are becoming more and more real

lairv•6h ago
https://nautil.us/deep-learning-is-hitting-a-wall-238440/

Gary Marcus said that Deep Learning was hitting a wall 1 month before the release of DALLE 2, 6 months before the release of ChatGPT and 1 year before GPT4, arguably 3 of the biggest milestones in Deep Learning

brazukadev•6h ago
Sam Altman said GPT-3 was dangerous and OpenAI should be responsible for saving humanity.
CamperBob2•4h ago
Worth pointing out that no one who doesn't work at a frontier lab has ever seen a completely un-nerfed, un-bowdlerized AI model.
brazukadev•1h ago
But we know that ChatGPT 5 is better than anything un-nerfed, un-bowdlerized from 2 years ago. And it is not impressive.
ACCount37•55m ago
There are some base models available to the public today. Not on "end of 2025 frontier run" scale, but a few of them are definitely larger and better than GPT-3. There are some uses for things like that.

Not that the limits of GPT-3 were well understood at the time.

We really had no good grasp of how dangerous or safe something like that would be - or whether there was some subtle tipping point that could propel something like GPT-3 all the way to AGI and beyond.

Knowing what we know now? Yeah, they could have released GPT-3 base model and nothing bad would have happened. But they didn't know that back then.

ACCount37•6h ago
"AI effect" is long known and pretty well documented.

When AI beat humans at chess, it didn't result in humans revising their idea of the capabilities of machine intelligence upwards. It resulted in humans revising their notion of how much intelligence is required to play chess at world champion level downwards, and by a lot.

Clearly, there's some sort of psychological defense mechanism in play. First, we see "AI could never do X". Then an AI does X, and the sentiment flips to "X has never required any intelligence in the first place".

goalieca•6h ago
I think it’s fairly safe to say that chess can be modelled as a math problem and does not require _general_ intelligence to solve.
ACCount37•6h ago
I think it's fairly safe to say that "X can be modeled as a math problem and does not require _general_ intelligence to solve" for any X that general intelligence can solve. Some math problems are just more complicated than others.
tim333•1h ago
It goes back further. Here he is in 2012:

>Norvig is clearly very interested in seeing what Hinton could come up with. But even Norvig didn’t see how you could build a machine that could understand stories using deep learning alone. https://www.newyorker.com/news/news-desk/is-deep-learning-a-...

dzink•7h ago
Why the patting of oneself on the back after a few opinions? You have large tech and government players and then you have regular people.

1. For large players: AGI is a mission worth pursuing at the cost of all existing profit (you won’t pay taxes today, the stock market values you on revenue anyway, and if you succeed you can control all people and means of production).

2. For regular people, current AI capabilities have already led to either life-changing skill improvement for those who make things for themselves, or life-changing, likely permanent, employment reduction for those who do things for others. If current AI is sufficient to meaningfully reduce the employment market, AGI doesn’t matter much to regular people. Their life is altered and many will be looking for manual work until AI enters that too.

3. The AI vendors are running at tremendous expense right now and the sources of liquidity for billions and trillions are very very few. It is possible a black swan event in the markets causes an abrupt end to liquidity and thus forces AI providers into pricing that excludes many existing lower-end users. That train should not be taken for granted.

4. It is also possible WebGPU and other similar scale-AI-across-devices efforts succeed and you get much more compute unlocked to replace advertising.

Serious question: Who in HN is actually looking forward to AGI existing?

the_arun•7h ago
I was content even without AI. I’m good with whatever we have today as long as we use it to change life in a positive way.
prox•7h ago
I am not against AGI, just the method and the players we have getting there. Instead of a curiosity to find intelligence, we just have rabid managers and derailed billionaires funding a race to … what? I don’t think even they know beyond a few hype words in their vocab and a buzzword-to-bullshit PowerPoint presentation.
pixl97•6h ago
This is just the world we live in now for everything. Remember .com? Web3? Crypto? And now AI. Hell, really going back in the past you see dumb shit like this happening with tulips.

We're lucky to have managed to progress in spite of how greedy we are.

pohl•6h ago
The luck may be coming to an end. The last two have serious misuses. Crypto’s best killer apps thus far have been rug-pulls, evading oversight of criminal financial activity, and corruption. AGI would today likely be called into service to tighten authoritarian grips.
brazukadev•7h ago
> Serious question: Who in HN is actually looking forward to AGI existing?

90% of the last 12 batches of YC founders would love to believe they are pursuing AGI with their crappy ChatGPT wrapper, agent framework, observability platform, etc.

mapontosevenths•6h ago
> > Serious question: Who in HN is actually looking forward to AGI existing?

I am.

It's the only serious answer to the question of space exploration. Rockets filled with squishy meat were never going to accomplish anything serious, unless we find a way of beating the speed of light.

Further, humanity's greatest weakness is that we can't plan anything long-term. Our flesh decays too rapidly and our species is one of perpetual noobs. Fields are becoming too complex to master in a single lifetime. A decent super-intelligence can not only survive indefinitely, it can plan accordingly, and it can master fields that are too complex to fit inside a single human skull.

Sometimes I wonder if humanity wasn't just evolution's way of building AIs.

card_zero•6h ago
I don't agree with the point about "perpetual noobs". Fields that are too broad to fit in a single mind in a lifetime need to be made deeper, that is, better explained. If a field only gets more expansive and intricate, we're doing it wrong.

Still, 130+ years of wisdom would have to be worth something, I can't say I dislike the prospect.

Noaidi•6h ago
There are a lot of hopeful assumptions in that statement. Who’s to say that if AGI is achieved it would want us to know how to go faster than the speed of light? You’re assuming that your wisdom and your plans would be AGI’s wisdom and plans. It might end up just locking us down here on Earth, sending us back to a more balanced, primitive life, and killing off a mass of humans in order to achieve ecological balance so humanity can survive without having to leave the planet. Note that that’s not something I am advocating; I’m just saying it’s a possibility.
card_zero•5h ago
Well, you're assuming that AGIs would see themselves as belonging to a separate faction, like aliens, instead of seeing themselves as being inorganic humans. They'd presumably be educated by human parents in human culture. We do tend to form groups based on trivial details, though, so I'm sure they'd be a subculture. But they wouldn't automatically gain any superior knowledge anyway, even if they did feel resentment for fleshy types. Being made of silicon (or whatever turns out to work) doesn't grant you knowledge of FTL, why would it?
mapontosevenths•1h ago
I actually assume that we'll never crack FTL.

The answer isn't AIs solving the unsolvable for us. The answer is admitting that large, fragile humans strapped to big bombs aren't an answer and serve no purpose. Small, power-efficient AGIs can boldly go where it is impossible for us to go and report back.

Maybe we'll eventually crack human upload, which would also work.

ACCount37•6h ago
It's kind of ironic that this generation of LLMs has worse executive functioning than humans do. Turns out the pre-training data doesn't really teach them that.

But AIs improve, as technology tends to. Humans? Well...

bossyTeacher•5h ago
> It's the only serious answer to the question of space exploration.

It is. But the world's wealthiest are not pouring billions so that humans can develop better space exploration tech. The goal is making more money.

card_zero•6h ago
I'm looking forward to artificial people existing. I don't see how they'd be a money-spinner, unless mind uploading is also developed and so they can be used for life extension. The LLM vendors have no relevance to AGI.
barrell•6h ago
I’m not convinced the current capabilities have impacted all that many people. I think the economy is much more responsible for the lack of jobs than “replacement with AI”, and most businesses have not seen returns on AI.

There is a tiny, tiny, tiny fraction of people who I would believe have been seriously impacted by AI.

Most regular people couldn’t care less about it, and the only regular people I know who do care are the ones actively boycotting it.

bossyTeacher•5h ago
> For regular people the current AI capabilities have already led to either life changing skill improvement for those who make things for themselves or life changing likely permanent employment reduction for those who do things for others

This statement sums up the tech-centric bubble HN lives in. Your average farmer, shop assistant, fisherman, or woodworker isn't likely to see significant life improvements from the transformer tech deployed until now.

stingraycharles•7h ago
This is not a really valuable article. The Apple paper was widely considered a “well, duh” paper, GPT5 being underwhelming seems to be mostly a cost-cutting / supply-can't-keep-up issue, and the others are mainly just expert opinions.

To be clear, I am definitely an AGI skeptic, and I very much believe that our current techniques of neural networks on GPUs are extremely inefficient, but this article doesn’t really add a lot to this discussion; it seems to self-congratulate on the insights of a few others.

an0malous•6h ago
I don’t think either of your first two statements are accurate, what is your support for those claims?
p1esk•6h ago
GPT5 compared to the original GPT4 is a huge improvement. It exceeded all my expectations from 2 years ago. I really doubted most GPT4 limitations would be resolved so quickly.

If they manage a similar quality jump with GPT6, it will probably meet most reasonable definitions of AGI.

dns_snek•5h ago
> GPT5 compared to the original GPT4 is a huge improvement. It exceeded all my expectations from 2 years ago.

Cool story. In my experience they're still on the same order of magnitude of usefulness as the original Copilot.

Every few months I read about these fantastic ground-breaking improvements and fire up whatever the trendiest toolchain is (most recently Claude Code and Cursor) and walk away less disappointed than last time, but ultimately still disappointed.

On simple tasks it doesn't save me any time but on more complex tasks it always makes a huge mess unless I mentor it like a really junior coworker. But if I do that I'm not saving any time and I end up with lower quality, poorly organized code that contains more defects than if I did it myself.

p1esk•2h ago
You probably just forgot how bad GPT4 was. It was failing half of the complicated tasks or questions I asked. GPT5 has a roughly 90% success rate for me.
dns_snek•35m ago
"Success rate" is a useless metric unless we have a shared definition of what success means. Either we have a categorically different idea of what constitutes a "complicated" task or we have categorically different standards of minimum viable quality.

Current SOTA agents are great at gluing together pre-made components but they break down as soon as I try to treat them like mid-level developers by asking them to implement a cross-cutting feature in an idiomatic way. Without junior-level hand-holding it far too often generates a half-broken implementation that looks right at first glance, supported by about 3 dozen tests, most of which are just useless bloat that doesn't test the right things.

Sometimes it starts looping on failing tests until it eventually gives up, confidently concluding that the implementation is complete with production-quality code, with whopping 90% test success rate (fabricating a lazy explanation for why the failing tests are outside of its control).

Even when the code does work and the tests pass it's poorly designed and poorly structured. There's very little attention paid to future maintainability, even with planning ahead of time.

GPT-5, Claude 4.5 Sonnet, it's all the same. "Thinking" or no thinking.

cs702•7h ago
I wouldn't call it "game over." That's too harsh. The truth is, we don't know.

Sure, there's a ridiculous amount of hype, fueled by greed and FOMO, often justified by cargo-cultish science, but... progress in AI seems inevitable, because we have the human brain as physical proof that it's possible to build an intelligent machine with hundreds of trillions of interconnections between neurons that consumes only about as much energy as an incandescent light bulb.

Today's largest AI models are still tiny in comparison, with only 1-2 trillion interconnections between neurons, each interconnection's weight specified by a parameter value. And these tiny AI models we have today consume many orders of magnitude more energy than a human brain. We have a long way to go, but we have proof that a human-brain equivalent is physically possible.
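A hedged back-of-the-envelope version of that comparison: the brain figures are commonly cited rough estimates, and the model and cluster figures are assumptions picked purely for illustration, not reported numbers.

    # Rough scale comparison: human brain vs. a frontier-model training setup.
    # All figures are assumptions or commonly cited rough estimates.
    brain_synapses = 100e12          # ~100 trillion synaptic connections
    brain_power_w = 20.0             # ~20 W, roughly a dim light bulb

    model_params = 1.5e12            # assume a ~1.5T-parameter model
    accelerators = 20_000            # assumed accelerator count for a training run
    accelerator_power_w = 700.0      # assumed per-device draw

    print(f"connections: brain is ~{brain_synapses / model_params:.0f}x larger")
    print(f"power: cluster draws ~{accelerators * accelerator_power_w / brain_power_w:,.0f}x a brain")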

The timing of progress is of course unpredictable. Maybe we will need new breakthroughs. Maybe not. No one knows for sure. In any case, breakthroughs don't come along on a predictable schedule. It's possible we will go through a period of stagnation that lasts months, or even years. We cannot rule out another "AI winter" just because we don't want one.

Even if progress is punctuated by high peaks and deep valleys, I believe we'll get there, eventually.

ufmace•6h ago
I mostly agree, though I actually wonder if the energy difference is as big as you say. Yeah the big LLM company datacenters consume tremendous amounts of power, but that's mostly for either training or serving ~millions of requests at once. I wonder what the actual net power consumption is for a single machine doing about as much work as a single ordinary person could do with just enough hardware to run the necessary models. Or what the average amount of energy for a single set of interactions is at one of the big shared datacenters - they reportedly have a lot of optimizations to fit more requests on the same hardware. I think it might be only one order of magnitude greater than a human brain. Maybe actually pretty close to equal if you compare total energy used against work being done, since the human brain needs to be kept alive all the time, but can only effectively do work for a limited part of the day.
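Napkin math for that per-interaction comparison; the per-query energy figure and usage pattern are pure assumptions, and published per-query estimates vary by an order of magnitude or more, so treat the ratio as illustrative only.

    # Per-interaction energy, LLM vs. brain, under made-up usage assumptions.
    energy_per_query_wh = 0.3     # assumed Wh per request; estimates vary roughly 0.3-3 Wh
    queries_per_day = 200         # assume heavy all-day usage

    brain_power_w = 20.0
    awake_hours = 16
    brain_wh = brain_power_w * awake_hours          # ~320 Wh just to be awake

    llm_wh = energy_per_query_wh * queries_per_day  # ~60 Wh
    print(f"LLM ~{llm_wh:.0f} Wh vs brain ~{brain_wh:.0f} Wh -> ratio {llm_wh / brain_wh:.2f}")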
cs702•5h ago
Great points. I was thinking only about the noggin inside our heads, without considering all the infrastructure it relies on for support!
Spivak•7h ago
Honestly, thank god. I hope we get more of this and it penetrates the zeitgeist so we can finally bring non-technical people back to earth. Folks, even on HN, are losing their damn minds over this tech. The writing has been on the wall for a while that there's not another breakthrough hiding in our current methodology. We invented the computer from Star Trek—that's an insane accomplishment, literally science fiction become reality. Maybe we can finally sit with that for a while instead of chasing the dragon. As we scale these AIs we're going to make a smarter Computer, we're not going to suddenly have Data. And that avenue is worth pursuing on its own.
topranks•6h ago
The computer from Star Trek is always correct though.

We created something close, and yes that is amazing, but it’s highly fallible.

raws•7h ago
I feel like even a human would fail if given all the data results except those for x and then tested on x, just as the function results differ from previously observed behavior. Isn't it more interesting to observe how the model (human or not) incorporates new data to better match reality in the case of distribution shift or other irregular distributions?
_fat_santa•6h ago
While Gary is very bearish on AI, I think there's some truth to his claims here though I disagree with how he got there. The problem I see with AI and AGI is not so much a technical problem as an economic one.

If we keep down our current trajectory of pouring billions on top of billions into AI then yes I think it would be plausible that in the next 10-20 years we will have a class of models that are "pseudo-AGI", that is we may not achieve true AGI but the models are going to be so good that it could well be considered AGI in many use cases.

But the problem I see is that this will require exponential growth and exponential spending and the wheels are already starting to catch fire. Currently we see many circular investments and unfortunately I see it as the beginning of the AI bubble bursting. The root of the issue is simply that these AI companies are spending 10x-100x or more on research than they bring in with revenue, OpenAI is spending ~$300B on AI training and infra while their revenue is ~$12B. At some point the money and patience from investors is going to run out and that is going to happen long before we reach AGI.

And I have to hand it to Sam Altman and others in the space that made the audacious bet that they could get to AGI before the music stops but from where I'm standing the song is about to come to an end and AGI is still very much in the future. Once the VC dollars dry up the timeline for AGI will likely get pushed another 20-30 years and that's assuming that there aren't other insurmountable technical hurdles along the way.

Havoc•6h ago
The Karpathy interview struck me as fairly upbeat despite the extended 10-year timeline. That's really not a long time for something with "changes everything" potential... as proper working agents would be.
arnaudsm•6h ago
Genuine question: why are hyperscalers like OpenAI and Oracle raising hundreds of billions? Isn't their current infra enough?

Naive napkin math: a GB200 NVL72 is $3M and can serve ~7,000 concurrent users of GPT-4o (rumored to be 1400B parameters with ~200B active), and ChatGPT has ~10M concurrent peak users. That's only ~$4B of infra.

Are they trying to brute-force AGI with larger models, knowing that GPT-4.5 failed at this, and DeepSeek & Qwen3 proved small MoE models can reach frontier performance? Or is my math 2 orders of magnitude off?
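Spelling out that napkin math with the comment's own figures, all of which are the commenter's assumptions rather than published numbers:

    # Rack cost, users-per-rack, and peak concurrency are assumed figures.
    rack_cost_usd = 3e6              # GB200 NVL72 rack, assumed
    users_per_rack = 7_000           # assumed concurrent users served per rack
    peak_concurrent_users = 10e6     # assumed ChatGPT peak concurrency

    racks = peak_concurrent_users / users_per_rack   # ~1,429 racks
    capex = racks * rack_cost_usd                    # ~$4.3B
    print(f"{racks:,.0f} racks -> ~${capex / 1e9:.1f}B of inference hardware")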

Noaidi•6h ago
They are raising the money because they can. While these businesses may go bankrupt, many people who ran these businesses will make hundreds of millions of dollars.

Either that, or AGI is not the goal; rather, they want to function for, and profit off of, a surveillance state that might be much more valuable in the short term.

ACCount37•6h ago
As a rule: inference is very profitable, frontier R&D is the money pit.

They need the money to keep pushing the envelope and building better AIs. And the better their AIs get, the more infra they'll need to keep up with the inference demand.

GPT-4.5's issue was that it wasn't deployable at scale - unlike the more experimental reasoning models, which delivered better task-specific performance without demanding that much more compute.

Scale is inevitable though - we'll see production AIs reach the scale of GPT-4.5 pretty soon. Newer hardware like GB200 enables that kind of thing.

lazide•4h ago
Their valuation projection spreadsheets call for it. If they touch those spreadsheets, a bunch of other things break (including their ability to be super-duper-crazy-rich), so don’t touch them.
dsign•6h ago
I don't think AGI is imminent in a "few months" sense, but in a "few decades" sense. Personally, I don't own stock in any AI company, though I'll be affected if the bubble bursts, because the world economy right now feels fragile.

But I'm hoping something good comes out of the real push to build more compute and to make it cheaper. Maybe a bunch of intrepid aficionados will use it to run biological simulations to make cats immortal, at which point I'll finally get a cat. And then I will be very happy.

erichocean•6h ago
If by AGI you mean "can do any economic task humans currently do", that is within the range of a "few months," though the rollout will be incremental since each economic task has to be independently taught, there are supply chain issues, and AI-first companies will need to be developed to take best advantage of it, etc.

But all of this is much closer than people seem to realize.

dns_snek•5h ago
How about we start with just one, like software development with its abundance of data? It's going to look far less silly for you when that one doesn't work out either.
erichocean•4h ago
Software development will be one of the first things AI labor does.

What is silly are people like you claiming it can't be done.

Your mistake is thinking that having "an abundance of data" is the bottleneck.

dns_snek•2h ago
Can you quote where I said that it can't be done? I only said it won't be done in a matter of months like you claimed.

> "can do any economic task humans currently do", that is within the range of a "few months,"

I think it's extremely unlikely that anything of the sort will be done by 2030 in any profession. I feel confident that I'll still have a job as a software developer in 2040 if I'm not in retirement by then.

Now the real question - how long until they can perform ANY "economic task" (or at least any office job)?

I don't think that's happening within our lifetimes. Certainly not within 20 years and to predict that this is coming within "months" isn't just silly, it's completely delusional.

> Your mistake is thinking that having "an abundance of data" is the bottleneck.

That, and the economics of building a billion-dollar supercomputer just to replace a junior software developer. And we're STILL just talking about software development.

How about any real engineering discipline, how many months before one of those jobs can be done by AI?

erichocean•2h ago
I'm comfortable letting history decide who is right. Thanks for your reply.
Noaidi•6h ago
I am horrified by what this might mean for the United States economy. All the investment in AI was premised on it leading to AGI, and now that that's not gonna happen, what's gonna happen to the investments that were made in the hope of this being more than just slop generation?
naasking•6h ago
Even if you think LLMs are not the road forward, that does not and cannot entail that AGI is not imminent.
mellosouls•6h ago
As somebody who used to link to the occasional Marcus essay, I find this a really poor "article" by a writer who has declined to the point of boorishness. The contents here are just a list of talking points already mostly discussed on HN, so nothing new, and his over-familiar soapbox proclamations add nothing to the discourse.

It's not that he's wrong; I probably still have a great deal of sympathy with his position, but his approach is more suited to social media echo chambers than to intelligent discussion.

I think it would be useful for him to take an extended break, and perhaps we could also do the same from him here.

hopelite•6h ago
I’m not sure an ad hominem assault is any different. You make proclamations without any evidence, as if what you say has any more credibility than the next person's. In fact, this response makes a reasonable person discount you.

Sure, it reads like some biased, coping, possibly even interested or paid hit piece, as if what happens can be changed by just being really negative about LLMs, but maybe consider taking your own advice there, kid; you know, an extended break.

mellosouls•6h ago
Please give an example of how we might criticise somebody's method of communication and general strong decline in useful contributions to debate (of the order of Marcus's) without you crying "ad hominem".
watwut•6h ago
I don't understand why AGI would be something we should want.
bossyTeacher•5h ago
According to HN, AGI is desirable because humans absolutely won't try to:

- Use it to dominate others

- Make themselves and a few others immortal and/or their descendants smarter

- Increase their financial and/or political power

- Cause irreversible damage to the global core ecosystems in their pursuit of the 3 goals above

tim333•1h ago
I'm kind of keen. Possibilities of semi unlimited stuff, diseases cured, immortality or something along those lines.
hiddencost•6h ago
Why do people keep upvoting him? He's been saying the same thing for 30 years. His brain is melting.
anon291•6h ago
We are already at AGI, yet no one seems to have noticed. Even with the limited sense perception that makes ChatGPT et al. the equivalent of a partially blind, partially deaf quadriplegic, it has demonstrated and continues to demonstrate above-average intelligence. Not sure what more needs to happen.

Sure we don't have embodied AI. Maybe it's not reflective enough. Maybe you find it jarring. Literally none of those things matter

CamperBob2•3h ago
The models that achieve something like AGI will need object permanence as well as sensory input.

It'll happen, because there's no reason why it won't. But no one knows the day or the hour... least of all this Gary Marcus guy, whose record of being wrong about this stuff is more or less unblemished.

socketcluster•6h ago
TBH, I don't think we actually need AGI. It's not a win-win. It's a civilization-altering double-edged sword with unclear consequences.

I'm quite satisfied with current LLM capabilities. Their lack of agency is actually a feature, not a bug.

An AGI would likely end up implementing some kind of global political agenda. IMO, the need to control things and move things in a specific, unified direction is a problem, not a solution.

With full agency, an AI would likely just take over the world and run it in ways which don't benefit anyone.

Agency manifests as thirst for power. Agency is man's inability to sit quietly in a room, by himself. This is a double-edged sword which becomes increasingly harmful once you run out of problems to solve... Then agency demands that new problems be invented.

Agency is not the same as consciousness or awareness. Too much agency can be dangerous.

We can't automate the meaning of life. Technology should facilitate us in pursuing what we individually decide to be meaningful. The individual must be given the freedom to decide their own purpose. If most individuals want to be used to fulfill some greater purpose (I.e. someone else's goals), so be it, but that should not be the compulsory plan for everyone.

dmix•6h ago
I didn't know serious technical people were taking the AGI thing seriously. I figured it was just an "aim for the stars" goal where you try to get a bunch of smart people and capital invested into an idea, and everyone would still be happy if we got 25% of the way there.
analognoise•6h ago
If our markets weren’t corrupt, everyone in the AI space would be bankrupt by now, and we could all wander down to the OpenAI fire sale and buy nice servers for pennies on the dollar.
dmix•6h ago
I'd take this more seriously if I didn't hear the same thing every other time there was a spike in VC investment. The last 5 times were the next dot com booms too.
hmokiguess•6h ago
Sure, lots of things can be true at the same time without being mutually exclusive — the way I see it, there's great effort being made out there in how to characterize and propagate the collective understanding of tools. What's the goal there? Is this a "call this something else please" type of work? In the spirit of metalinguistics, what do you call this branch of human nature and behaviour as it pertains to our evolution of knowledge and use of language to apply it as a species?

Like, we clearly are deriving some kind of value from the current AI as a product — are some researchers and scientists just unhappy that these companies are doing that by using marketing that doesn't comply with their world views?

Does someone know of other similar parallels in history where perhaps the same happened? I'm sure we butchered words with complex meanings in different domains, and I bet you some of these, looking back, are a bunch of nothing burgers.

ChrisArchitect•3h ago
Related:

Andrej Karpathy — AGI is still a decade away

https://news.ycombinator.com/item?id=45619329

chriskanan•3h ago
I think we need to distinguish among kinds of AGI, as the term has become overloaded and redefined over time. I'd argue we need to retire the term and use more appropriate terminology to distinguish between economic automation and human-like synthetic minds. I wrote a post about this here: https://syntheticminds.substack.com/p/retiring-agi-two-paths...
wseqyrku•2h ago
The fact that anyone would think they would actually even consider releasing it when it's ready is amusing to me. There are so many surveillance opportunities there. It is not going to be for the public.