Anders Hejlsberg: A Craftsman of Computer Language

https://www.microsoft.com/en-us/behind-the-tech/anders-hejlsberg-a-craftsman-of-computer-language
1•andsoitis•1m ago•0 comments

China used three private companies to hack global telecoms

https://www.nbcnews.com/tech/security/china-used-three-private-companies-hack-global-telecoms-us-...
1•2OEH8eoCRo0•1m ago•0 comments

Good morning, Brit Xbox fans – ready to prove your age?

https://www.theregister.com/2025/08/28/xbox_online_safety_act/
1•rntn•2m ago•0 comments

The Power Is in the Network

https://morrisbrodersen.de/the-power-is-in-the-network/
1•todsacerdoti•3m ago•0 comments

Show HN: Txtos for LLMs – 60 SEC setup, long memory, boundary guard, MIT

https://github.com/onestardao/WFGY/blob/main/OS/README.md
2•tgrrr9111•5m ago•0 comments

Compilation vs. vectorization, search engine edition

https://jpountz.github.io/2025/08/28/compiled-vs-vectorized-search-engine-edition.html
1•mfiguiere•5m ago•0 comments

AI Music Playlist Creator

1•Chukwuebukaagm•6m ago•0 comments

Amtrak's flagship Acela trains get a long-awaited upgrade

https://www.npr.org/2025/08/28/nx-s1-5515654/amtrak-acela-trains-northeast-corridor-upgrade
2•voxadam•8m ago•0 comments

You Are All on the Hobbyists Maintainers' Turf Now (2024)

https://www.softwaremaxims.com/blog/open-source-hobbyists-turf
2•pabs3•8m ago•0 comments

Ask HN: Anyone working on bringing software back from US clouds?

2•sam_lowry_•9m ago•0 comments

Sometimes CPU cores are odd

https://anubis.techaro.lol/blog/2025/cpu-core-odd
1•todsacerdoti•11m ago•1 comments

An EM's Side Project Reached 1,800 GitHub Stars

https://newsletter.manager.dev/p/how-an-ems-side-project-reached-1800
1•AntonZ234•13m ago•0 comments

Ask HN: Which LinkedIn roles should I target for web scraping products/services?

1•vikramaruchamy•16m ago•0 comments

Nothing busted using professional photos as Phone 3 samples

https://www.theverge.com/report/766543/nothing-busted-using-fake-phone-3-photo-samples
2•k33l0r•16m ago•0 comments

Evaluation Code – GPT-5 on Multimodal Medical Reasoning

https://github.com/wangshansong1/GPT-5-Evaluation
2•Topfi•17m ago•0 comments

Hobbyist Maintainers with Thomas DePierre

https://opensourcesecurity.io/2025/2025-06-hobbyist-thomas-depierre/
1•pabs3•17m ago•0 comments

Project Showcase: Movuan

https://pine64.org/2025/08/27/august_2025_movuan/
2•wicket•19m ago•0 comments

In-App Browsers: The worst erosion of user choice you haven't heard of (2024)

https://open-web-advocacy.org/blog/in-app-browsers-the-worst-erosion-of-user-choice-you-havent-he...
7•wicket•20m ago•0 comments

AI audio generation/cleanup trained on my voice

1•jgrauman•21m ago•0 comments

Shakespeare can help us overcome loneliness in the digital age

https://scroll.in/article/1085865/how-shakespeare-can-help-us-overcome-loneliness-in-the-digital-age
2•akbarnama•22m ago•0 comments

Tips for installing Windows 98 in QEMU/UTM

https://sporks.space/2025/08/28/tips-for-installing-windows-98-in-qemu-utm/
1•zdw•22m ago•0 comments

Ask HN: What perfectly written monospace block text am I looking for?

4•meta-level•23m ago•0 comments

Anthropic's auto-clicking AI Chrome extension raises browser-hijacking concerns

https://arstechnica.com/information-technology/2025/08/new-ai-browser-agents-create-risks-if-site...
2•Bogdanp•23m ago•0 comments

How Cloudflare runs more AI models on fewer GPUs

https://blog.cloudflare.com/how-cloudflare-runs-more-ai-models-on-fewer-gpus/
2•eldridgea•25m ago•0 comments

World Train Travel Guide – The Man in Seat Sixty-One

https://www.seat61.com/index.html
2•mhb•25m ago•0 comments

AI Has Broken Hiring

https://brodzinski.com/2025/08/broken-ai-hiring.html
2•flail•28m ago•0 comments

A perfect symbiosis: planting vines and other ways hot cities are creating cool spaces

https://www.theguardian.com/environment/2025/aug/28/planting-vines-and-other-ways-hot-cities-crea...
1•tocs3•28m ago•0 comments

A lower bound on the length of the shortest superpattern

https://warosu.org/sci/thread/S3751105#p3751197
1•gadders•29m ago•0 comments

The sisters "paradox" – counter-intuitive probability

https://blog.engora.com/2025/08/the-sisters-paradox-counter-intuitive.html
5•Vermin2000•32m ago•2 comments

Archaea produce peptidoglycan hydrolases that kill bacteria

https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.3003235
1•PaulHoule•33m ago•0 comments

AGI Overhyped?

12•brandozer111•6h ago
I keep hearing about how AGI is going to change the game and how it's going to immediately lead to a dangerous ASI. I even hear it from people I know who know almost nothing about how AI works. I am no pro myself, but I think having a fundamental understanding of how AI computes and stores information changes the perspective greatly.

Firstly, what is AGI? I've never heard a decent definition. Some say it's an AI that is as smart or as general as humans; some say it's an AI that's conscious. I don't see how it could be a "step" by these definitions because nothing actually changes between a regular AI and an AGI. To make AI more general, you just make the input tokenization more granular, change the training method, and perhaps add some kind of iterative framework on top to make it do more stuff. Also, AI is already more capable than humans in almost every way except scale.

I personally think that it's just a hype word that Altman spammed to get more funding and interest in OpenAI. Even if you snapped your fingers and had the right training mechanism and the right networks, etc., for an AGI/ASI, I get the feeling it wouldn't even be smarter than people in the technical sense. AI already blows most people out of an IQ test at a fraction of the computational power of a brain, but that's because IQ tests compare competence to get relative intelligence; they don't test computation.

With that assumption, if AI can't be computationally stronger than humans, it's safe to say we won't have conscious computers for a while, but rather computers that act consciously. Does that mean that AI from here on out is a waste that does nothing but take control from people while benefiting us the same? What is ChatGPT going to look like in 5 years? Am I just going to type in "do my taxes," and it's just going to do whatever it wants on my pc until my taxes are done? Why would I ever want that over a system designed to do my taxes correctly EVERY TIME by accountants? One thing I know about AI is that it is slow as a mf. AI is great, but you really have to think: we really just built a giant dictionary guy who's going to have the same problems human employees have.

I don't know, just kind of spewing thoughts. I'd love to hear from people who are actual experts in designing these things.

Comments

brandozer111•6h ago
(-here) damn that one hurts
jfoster•6h ago
You're absolutely right that AGI and ASI are not well-defined, but we also need to recognize that they definitely are actual things even though we lack a definition.

For any AI concept, I think it might be instructive to consider the human equivalent.

For example, what is the definition of a genius? We don't have one, but genius certainly is a thing that exists. We might argue about who is/isn't a genius because of the missing definition, but we probably agree that exceptional people are a thing.

rkuodys•5h ago
While it's definitely an interesting comparison, I would say that the key thing is that genius is limited in scope. What I mean by that is: given any single genius, we might agree that he is one on subject X but not on subject Y.

With AGI it seems that the expectation is to cover all the subjects. Which I think is more like god. You either believe it or you don't. No one has definite proof of its existence or non-existence.

jillesvangurp•5h ago
Spot on. We struggle to define what we don't understand. And with AGI, that would include understanding ourselves. And when people start dragging in philosophy, religion, etc., you kind of know it's one of those things that definitely doesn't have much consensus.

That doesn't mean it's all nonsense. I think objectively we are getting quite a few emergent definitions, grounded in established fact and technology, that kind of narrow this down. LLMs seem to be part of the solution on a path to some form of artificial intelligence that can keep up with us and that we might struggle to keep up with. But LLMs are not the whole solution, though what they can't do keeps shifting: from glorified autocomplete to solving mathematical problems that were previously unsolved, all in the space of less than 3 years since the launch of ChatGPT in November 2022. If the math wasn't solved before, there has to be a bit more to it than a glorified autocomplete.

Of course there are also counter points to that. But it at least challenges the notion that LLMs can't come up with new stuff. Even the notion that they can is deeply upsetting to some people because it challenges their world views. This debate is as much about that as it is about coming up with workable definitions.

LLMs are obviously lacking in a lot of ways. Perhaps the most obvious thing is that after their training is completed, they stop learning and adapting. They have no memory beyond their prompts and what they learned during training. A ChatGPT conversation is just a large prompt with the entire history of the conversation and some clever engineering to load that into GPU memory relatively quickly. There are a lot of party tricks that involve elaborate system prompts and other workarounds.
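
As a rough sketch of that idea (call_llm is a made-up stand-in here, not any real API):

    # A chat "session" is just a growing list of messages replayed
    # in full on every turn; nothing persists inside the model.
    def call_llm(messages):
        # hypothetical stand-in for a real completion API
        return f"(model reply, having seen all {len(messages)} messages)"

    history = [{"role": "system", "content": "You are a helpful assistant."}]

    def chat(user_message):
        history.append({"role": "user", "content": user_message})
        reply = call_llm(history)  # the model sees the transcript, nothing else
        history.append({"role": "assistant", "content": reply})
        return reply

    print(chat("Hi, my name is Ada."))
    print(chat("What's my name?"))  # only "remembered" because it's still in the prompt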

The ability to remember and learn seems pretty fundamental to AGI, whatever that is. It's also a thing that doesn't sound like it's unsolvable. Some kind of indexed DB external to the AI for RAG kind of works but it does not solve the learning problem. And it shifts the problem to how to query and filter the data, which might not be optimal. And it's more similar to us using Google to find something than it is to us remembering information.
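
A minimal sketch of that indexed-DB idea (everything here is hypothetical, with a toy bag-of-letters embedder standing in for a real embedding model):

    import math

    def embed(text):
        # toy stand-in for a real embedding model
        vec = [0.0] * 26
        for ch in text.lower():
            if "a" <= ch <= "z":
                vec[ord(ch) - ord("a")] += 1.0
        return vec

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a) * sum(x * x for x in b))
        return dot / norm if norm else 0.0

    # The "indexed DB external to the AI": notes stored with their vectors.
    notes = ["user prefers metric units", "user's cat is named Ada"]
    index = [(note, embed(note)) for note in notes]

    def retrieve(query, k=1):
        q = embed(query)
        ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
        return [note for note, _ in ranked[:k]]

    # Retrieved notes get pasted into the prompt; nothing is truly "remembered".
    context = "; ".join(retrieve("what is my cat called?"))
    print(f"Context: {context}\nQuestion: what is my cat called?")

Everything then hinges on retrieve() surfacing the right note, which is exactly the query-and-filter problem the learning problem gets shifted to.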

Also, it's not like we're particularly good at cramming large amounts of information down or learning. Learning is a slow process. And we kind of get set in our ways as we age. Meaning we're reluctant to learn more stuff and less capable of doing so. That suggests that even modest improvements might have dramatic results. LLMs + some short term memory and ability to learn might end up being a lot more useful.

I like duck typing in programming and that's also my mental model for AGIs. The Turing test is kind of obsolete. But if it quacks like a duck and walks like a duck, it probably is a duck. Turing was onto something. Once we have a hard time telling apart the average person in society from an AI over days/weeks/years of interaction, we'll have AGIs. I think that's doable. But I'm not an ML engineer. A lot of those seem to believe it's doable too though.
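
For the non-programmers, duck typing in (for example) Python looks like this; the point is that nothing ever checks what the thing is, only what it does:

    class Duck:
        def quack(self):
            return "quack"

    class ConvincingRobotDuck:
        def quack(self):
            return "quack"

    def seems_like_a_duck(thing):
        # no isinstance() check: only behavior counts,
        # which is the same move the Turing test makes
        return thing.quack() == "quack"

    print(seems_like_a_duck(Duck()), seems_like_a_duck(ConvincingRobotDuck()))  # True True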

The rest of society, in the form of self-appointed armchair professors, religious leaders, eminent philosophers, and (to put this politely) lesser qualified individuals with strong opinions, makes a lot of confused noises. But it does not generate a lot of definitions that have broad consensus.

edu•6h ago
One of the few things I’m sure about the current AI scene is that the marketing strategies of OpenAI, Anthropic, etc will be studied in business schools.
ngruhn•6h ago
> I've never heard a decent definition

I would say AGI can do every intellectual task a human can do. Maybe the raw cognitive power is already there but scale matters. If it can't work on a task for months on end, it's not AGI. Because humans can do it.

> AI already blows most people out of an IQ test at a fraction of the computational power of a brain

Does it? I thought the brain is much more energy efficient.

XorNot•6h ago
I think the more obvious problem is AI, given access to actuators, still couldn't untangle a ball of string successfully.

The systems appear smart because they're language models trained on quality tested text.

mg•5h ago
> it can't work on a task for months

    reply = AI.ask(task)
    while True:
        reply = AI.ask(f"""
            Improve the reply below:
            Task: {task}
            Reply: {reply}
        """)
        print(reply)
dudefeliciano•5h ago
have you tried this? honestly curious what kind of hallucinations would come out of this
mg•2h ago
I'm using a version of this when coding. I feed the task and the reply back to the LLM once and ask if the task was accomplished well.

That is actually useful, as it often comes up with the same critique I have when reading through the commit. So it gives me more confidence that I did not miss any issues in the commit.
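
Roughly like this, reusing the hypothetical AI.ask, task, and reply names from the snippet above, but as a single review pass instead of an endless loop:

    # one critique pass: ask the model to review its own output
    review = AI.ask(f"""
        Was the task below accomplished well? List anything that was missed.
        Task: {task}
        Reply: {reply}
    """)
    print(review)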

ben_w•5h ago
> Does it? I thought the brain is much more energy efficient.

It strongly depends on what you're trying to do with the AI. Consider "G" in "AGI" as being a dot-product over the quality of results for all the things (I_a) that some AI can do and the things (I_h) a human can do.

Stuff where the AI is competent enough to give an answer is often (but not always) lower energy than a human.

As an easy example of mediocre AI with unambiguously low power: think models that run on a high-end mac laptop, in the cases where such models produce good-enough answers and do so fast enough that it would have been like asking a human.

More ambiguously: If OpenAI's prices even closely resemble cost of electricity, then GPT-5-nano is similar to human energy cost if you include our bodies, beats us by a lot if you also account for humans having a 25% duty cycle when we're employed and a lifetime duty cycle of 10-11%.
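
To make the duty-cycle arithmetic concrete, a back-of-the-envelope sketch (every number is a rough assumption for illustration, not a measurement):

    # Energy cost per *productive* human hour: the idle hours get billed too.
    kcal_per_day = 2000                          # rough human food intake
    watts_human = kcal_per_day * 4184 / 86400    # ~97 W continuous metabolic draw

    employed_duty_cycle = 0.25    # fraction of employed hours actually worked (from above)
    lifetime_duty_cycle = 0.105   # ~10-11% of a whole lifetime spent working (from above)

    print(round(watts_human / employed_duty_cycle))   # ~387 W per working hour while employed
    print(round(watts_human / lifetime_duty_cycle))   # ~922 W per working hour over a lifetime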

Stuff where the AI isn't competent enough to give an answer… well, there's theoretical reasons to think you can trade off for more competence by having it "think" for longer, but it's an exponential increase in runtime for linear performance improvements, so you very quickly reach a point where the AI is far too energy-intensive to bother with.

maxsavin•6h ago
The key word in AI is "artificial"
SCdF•6h ago
Part of the difficulty here is that the discussion around new-age AI/LLMs has a lot of similarities with crypto, in the sense that there is a lot of nonsense out there. Unlike crypto there is obviously some value, as opposed to none, so it's not all grift. But like you I find it hard to sort out the difference between the two.

Fundamentally for me I can't get over the idea that extraordinary claims require extraordinary evidence, and so far I haven't really seen any evidence. Certainly not any that I would consider extraordinary.

It's like saying that if a magician worked _really really hard_ on improving, evolving and revolutionising their "cut the assistant in half and put them back together" trick, they'll eventually be able to actually cut them in half and put them back together.

I have not seen a convincing reason to think that the path that is being walked down ends up at actual intelligence, in the same way that there is no convincing reason to think the path magicians walk down ends up in actual magic.

preisschild•6h ago
> Unlike crypto there is obviously some value, as opposed to none

Even some cryptocurrencies like Monero have value if you consider "making digital transactions anonymously" to have value. I definitely do.

heresie-dabord•5h ago
> Unlike crypto there is obviously some value

To be fair, there is obviously some economic value in the fungibility of crypto-currency. The political and technical aspects are dubious.

> extraordinary claims require extraordinary evidence

Agreed, the only extraordinary achievement for this magic act so far is market capitalisation.

mitthrowaway2•5h ago
I'm not sure.

We never expected that there even could be a magic trick that came so close to mimicking human intelligence without actually being it. I think there are only so many ways that matter can be arranged to perform such tricks; we're putting lots of work into exploring that design space more thoroughly than ever before, and sooner or later we'll stumble on the same magic tricks that we humans are running under the hood.

SCdF•5h ago
> and sooner or later we'll stumble on the same magic tricks that we humans are running under the hood.

Right, so this is the extraordinary claims bit. I'm not an expert in any of the required fields, to be clear, but I'm just not seeing the path, and no one as yet has written a clear and concise explainer on it.

So my presumption, given past experience, is that it is hype designed to drive investment plus hopium, not something that is based on any actual reasoned thought process.

ben_w•4h ago
Sure, but evolution isn't an actual reasoned thought process, and it still managed us without even having humans as an explicit goal; we just popped out of the process by accident as a way to be effective at surviving in the wild.
ben_w•4h ago
> It's like saying that if a magician worked _really really hard_ on improving, evolving and revolutionising their "cut the assistant in half and put them back together" trick, they'll eventually be able to actually cut them in half and put them back together.

So, surgery?

As the stage magicians Penn and Teller (well, just Penn) said, stage magic is about making something very hard look easy, so much so that your audience simply doesn't even imagine the real effort you put into it.

Better analogy here would be asking if we're trying to summon cargo gods with hand-carved wooden "radios".

SCdF•3h ago
No, not surgery. We didn't get to surgery by working really hard on a magic trick. I'm also reasonably sure surgery is not at a point where you can cut someone entirely in half and put them back together.

Since you brought up Penn and Teller, take the bullet catch. They are not actually catching a bullet. No matter how hard they work on perfecting their bullet catch trick, this will not allow them to catch a real bullet shot from a real gun in their real teeth. Working on the trick is not the journey where the end point is doing the actual thing you're representing in the trick.

tylerhou•6h ago
> AI already blows most people out of an IQ test at a fraction of the computational power of a brain

AFAIK, IQ tests used in psychological evaluations do not contain any randomness so exact answers are almost always in distribution. I haven't seen someone compare AI to an IQ test that is not in distribution.

On ARC-AGI, which is mildly similar to a randomly generated IQ test, humans still are much better than LLMs. https://arcprize.org/ (scroll down for chart)

jstanley•5h ago
I scrolled down but didn't find a chart comparing average human performance to AI performance.

The only chart I found was comparing the costs of different models.

tylerhou•5h ago
Sorry, you're right that the chart on the home page does not have human performance. The leaderboard chart does: https://arcprize.org/leaderboard. And the leaderboard by default shows scores for ARC-AGI 1 and 2. The models are much worse at 2 than 1; the best performing model scores around 15% (Grok 4, thinking), while humans are at ~100%.
jstanley•4h ago
Thanks, and do we know if the humans are average people off the street, or unusually-intelligent people?

EDIT: OK, I see there are 3 types of humans:

"Avg. Mturker" does worst. "Stem Grad" and "Human Panel" are basically equivalent in terms of quality but differ in cost.

It's not obvious to me whether an average Mturker would be more or less clever than the average person. Mturk doesn't pay very well, so you'd think you'd have to be below average to want to do it. But potentially it attracts people of above-average intelligence who just happen to live in the third world?

rsynnott•3h ago
Additional caveat: some of the "avg mturker" cohort are almost certainly using LLMs to participate.
kissgyorgy•5h ago
> Firstly, what is AGI?

AGI is the biggest successful scam in human history, one Sam Altman came up with to get the insane investment and hype they are getting. They have intentionally not defined what it is or when it will be achieved, making it a never-reachable goal that keeps the flow of money going. "We will be there in a couple of years" and "this feels like AGI" get said at every fucking GPT release.

It's in the best interest of every AI lab to keep this lie going. They are not stupid; they know it can't be reached with the current state-of-the-art techniques, transformers, even with recent groundbreaking additions like reasoning, and I think we are not even close.

mitthrowaway2•5h ago
> To make AI more general, you just make the input tokenization more granular, change the training method, and perhaps add some kind of iterative framework on top to make it do more stuff.

It's probably a bigger step than that. For example humans learn from experience, current AIs are trained offline and then frozen and can only "learn" by stuffing their short term memory with notes, like someone with anterograde amnesia.

AI technology is changing each year, and AGI will probably be more different from today's transformer-based systems than they were from convolutional nets.

But it will probably incorporate ideas and components from today's LLMs, and just as importantly, its development will probably be paid for from the pockets of investors who have been tantalized by today's AI and think AGI must soon follow.

I personally hope it's a long way off but it might only be a few key insights away at this point.

ben_w•5h ago
> Firstly, what is AGI? I've never heard a decent definition. Some say it's an AI that is as smart or as general as humans; some say it's an AI that's conscious.

Correct, but many have noticed this including Sam Altman in some interviews.

Everyone disagrees about all three initials of the initialism, plus if the whole even means something implied by those initials at all, plus treating each as a boolean rather than a number.

Also some people loudly reject this observation.

The definition I was using for "AGI" before ChatGPT came out was met by ChatGPT-3.5: it's an AI that's general-purpose, it doesn't just e.g. play chess. But there are people who reject that it even counts as "AI" at all, despite, well, the Turing Test.

Anyway.

> I don't see how it could be a "step" by these definitions because nothing actually changes between a regular AI and an AGI. To make AI more general, you just make the input tokenization more granular, change the training method, and perhaps add some kind of iterative framework on top to make it do more stuff.

That's a lot of things.

> Also, AI is already more capable than humans in almost every way except scale.

No, not really. There's specific metrics where some AI can beat a lot of humans, like playing chess, or how many languages it speaks to the level of a competent adult learner, and they can pay attention to a lot more than we can, and they can process data faster than we can, but…

…but LLMs* are only giving the illusion of intelligence by using superhuman speed and superhuman attention to make up for having a mind with the complexity of a squirrel's that's had half a million years of experience reading everything it can lay its metaphorical hands on.

* not VLMs, they're not fast enough or smart-looking enough yet

> AI already blows most people out of an IQ test at a fraction of the computational power of a brain, but that's because IQ tests compare competence to get relative intelligence; they don't test computation.

IQ tests are indeed bad; they can be learned. This is demonstrable because the graph you may have seen for AI IQ ratings has two variants from the same people, one with a public IQ test and one with a private IQ test, and most of the AIs do much, much worse on the private test: https://trackingai.org/home

The problem you're pointing at is also why the ARC-AGI test exists, and indeed this shows that current AI aren't anything close to human performance.

> With that assumption, if AI can't be computationally stronger than humans, it's safe to say we won't have conscious computers for a while, but rather computers that act consciously.

If you think "AGI" is poorly defined, you'll be horrified to learn that "consciousness" has something like 40 definitions. Nobody knows where it comes from in our brain structures or how much structure we need to have it. Is a mouse conscious? If so, there's enough compute going on for many AI models to be also. But the quantity of compute is likely a red herring, and the compute exists even if it's not running an AI, just as our brain chemistry still goes on while we're sleeping.

> Does that mean that AI from here on out is a waste that does nothing but take control from people while benefiting us the same?

Yes, but such people will take any opportunity or tech to do so; we've seen that since "automation" meant "steam engine".

> What is ChatGPT going to look like in 5 years? Am I just going to type in "do my taxes," and it's just going to do whatever it wants on my pc until my taxes are done?

If you're lucky. I think probably not, but 5 years is too long to rule out an architectural breakthrough that makes them much less error-prone; the Transformer models in 2020 were unimpressive for anything except translation.

> Why would I ever want that over a system designed to do my taxes correctly EVERY TIME by accountants?

$$$; plus accountants aren't perfect, they're only human.

> One thing I know about AI is that it is slow as a mf. AI is great, but you really have to think: we really just built a giant dictionary guy who's going to have the same problems human employees have.

If we're lucky.

Most likely it will continue to have new and exciting problems.

dudefeliciano•5h ago
> Also, AI is already more capable than humans in almost every way except scale.

Not at all, AI still ridiculously fails at tasks that are quite simple for humans/children.

trabant00•4h ago
I define AGI as the only real AI that can exist. Anything less is just mimicry. To give a stupid/simplified example: a specialized truck-driving "AI" would not be able to decide whether or not to stop when somebody steps in front of it, making it trivial to rob "AI"-driven trucks. To decide, it needs to understand the kinds of people that exist, their motivations, the laws, etc. So it needs to be an AGI. Otherwise it will make horrible mistakes we don't even think are possible, or ones so uncommon that when they happen to a human they make the news.