frontpage.

Show HN: Jmail – Google Suite for Epstein files

https://www.jmail.world
807•lukeigel•13h ago•158 comments

Measuring AI Ability to Complete Long Tasks

https://metr.org/blog/2025-03-19-measuring-ai-ability-to-complete-long-tasks/
137•spicypete•6h ago•88 comments

Backing up Spotify

https://annas-archive.li/blog/backing-up-spotify.html
1245•vitplister•16h ago•417 comments

Indoor tanning makes youthful skin much older on a genetic level

https://www.ucsf.edu/news/2025/12/431206/indoor-tanning-makes-youthful-skin-much-older-genetic-level
55•SanjayMehta•5h ago•14 comments

Inca Stone Masonry

https://www.earthasweknowit.com/pages/inca_construction
29•jppope•3h ago•5 comments

Isengard in Oxford

https://lareviewofbooks.org/article/isengard-in-oxford/
44•lermontov•4h ago•2 comments

Ruby website redesigned

https://www.ruby-lang.org/en/
75•psxuaw•3h ago•16 comments

Ireland’s Diarmuid Early wins world Microsoft Excel title

https://www.bbc.com/news/articles/cj4qzgvxxgvo
242•1659447091•14h ago•84 comments

Claude in Chrome

https://claude.com/chrome
201•ianrahman•13h ago•102 comments

Go ahead, self-host Postgres

https://pierce.dev/notes/go-ahead-self-host-postgres#user-content-fn-1
531•pavel_lishin•18h ago•311 comments

Pure Silicon Demo Coding: No CPU, No Memory, Just 4k Gates

https://www.a1k0n.net/2025/12/19/tiny-tapeout-demo.html
348•a1k0n•17h ago•50 comments

Log level 'error' should mean that something needs to be fixed

https://utcc.utoronto.ca/~cks/space/blog/programming/ErrorsShouldRequireFixing
385•todsacerdoti•3d ago•239 comments

Show HN: Open-source Markdown research tool written in Rust – Ekphos

https://github.com/hanebox/ekphos
10•haneboxx•4d ago•2 comments

OpenSCAD is kinda neat

https://nuxx.net/blog/2025/12/20/openscad-is-kinda-neat/
245•c0nsumer•16h ago•176 comments

Big GPUs don't need big PCs

https://www.jeffgeerling.com/blog/2025/big-gpus-dont-need-big-pcs
204•mikece•16h ago•75 comments

Flock and Cyble Inc. weaponize "cybercrime" takedowns to silence critics

https://haveibeenflocked.com/news/cyble-downtime
442•_a9•9h ago•77 comments

A visual editor for the Cursor Browser

https://cursor.com/blog/browser-visual-editor
7•evo_9•5d ago•4 comments

From devastation to wonder as Kangaroo Island bushfires lead to cave discoveries

https://www.abc.net.au/news/2025-12-13/more-than-150-caves-discovered-in-ki-after-devastating-bus...
58•speckx•5d ago•8 comments

Chomsky and the Two Cultures of Statistical Learning

https://norvig.com/chomsky.html
65•atomicnature•5d ago•48 comments

Clair Obscur having its Indie Game Awards Game of the Year award stripped due to AI use

https://www.thegamer.com/clair-obscur-expedition-33-indie-game-awards-goty-stripped-ai-use/
38•anigbrowl•3h ago•70 comments

Show HN: HN Wrapped 2025 - an LLM reviews your year on HN

https://hn-wrapped.kadoa.com?year=2025
199•hubraumhugo•20h ago•115 comments

Gemini 3 Pro vs. 2.5 Pro in Pokemon Crystal

https://blog.jcz.dev/gemini-3-pro-vs-25-pro-in-pokemon-crystal
286•alphabetting•4d ago•87 comments

I spent a week without IPv4 (2023)

https://www.apalrd.net/posts/2023/network_ipv6/
143•mahirsaid•16h ago•261 comments

What's New in Python 3.15

https://docs.python.org/3.15/whatsnew/3.15.html
82•azhenley•3d ago•19 comments

Italian bears living near villages have evolved to be smaller and less aggressive

https://phys.org/news/2025-12-italian-villages-evolved-smaller-aggressive.html
88•wjSgoWPm5bWAhXB•5d ago•51 comments

Why do people leave comments on OpenBenches?

https://shkspr.mobi/blog/2025/12/why-do-people-leave-comments-on-openbenches/
165•sedboyz•18h ago•14 comments

You have reached the end of the internet (2006)

https://hmpg.net/
156•raytopia•17h ago•46 comments

Make the eyes go away

https://hexeditreality.com/posts/make-the-eyes-go-away/
8•llllm•3d ago•1 comment

Biscuit is a specialized PostgreSQL index for fast pattern matching LIKE queries

https://github.com/CrystallineCore/Biscuit
104•eatonphil•4d ago•17 comments

Skills Officially Comes to Codex

https://developers.openai.com/codex/skills/
282•rochansinha•1d ago•125 comments

I doubt that anything resembling genuine AGI is within reach of current AI tools

https://mathstodon.xyz/@tao/115722360006034040
66•gmays•5h ago

Comments

mindcrime•4h ago
Terry Tao is a genius, and I am not. So I probably have no standing to claim to disagree with him. But I find this post less than fulfilling.

For starters, I think we can rightly ask what it means to say "genuine artificial general intelligence", as opposed to just "artificial general intelligence". Actually, I think it's fair to ask what "genuine artificial" $ANYTHING would be.

I suspect that what he means is something like "artificial intelligence, but that works just like human intelligence". Something like that seems to be what a lot of people are saying when they talk about AI and make claims like "that's not real AI". But for myself, I reject the notion that we need "genuine artificial general intelligence" that works like human intelligence in order to say we have artificial general intelligence. Human intelligence is a nice existence proof that some sort of "general intelligence" is possible, and a nice example to model after, but the marquee sign does say artificial at the end of the day.

Beyond that... I know, I know - it's the oldest cliche in the world, but I will fall back on it because it's still valid, no matter how trite. We don't say "airplanes don't really fly" because they don't use the exact same mechanism as birds. And I don't see any reason to say that an AI system isn't "really intelligent" if it doesn't use the same mechanism as humans.

Now maybe I'm wrong and Terry meant something altogether different, and all of this is moot. But it felt worth writing this out, because I feel like a lot of commenters on this subject engage in a line of thinking like what is described above, and I think it's a poor way of viewing the issue no matter who is doing it.

npinsker•1h ago
> I suspect that what he means is something like "artificial intelligence, but that works just like human intelligence".

I think he means "something that can discover new areas of mathematics".

dr_dshiv•1h ago
I’d love to take that bet
mindcrime•1h ago
Very reasonable, given his background!

That does seem awfully specific though, in the context of talking about "general" intelligence. But I suppose it could rightly be argued that any intelligence capable of "discovering new areas of mathematics" would inherently need to be fairly general.

themafia•1h ago
> That does seem awfully specific though

It's one of a large set of attributes you would expect in something called "AGI."

enraged_camel•1h ago
The airplane analogy is a good one. Ultimately, if it quacks like a duck and walks like a duck, does it really matter if it’s a real duck or an artificial one? Perhaps only if something tries to eat it, or another duck tries to mate with it. In most other contexts though it could be a valid replacement.
clort•54m ago
Just out of interest though, can you suggest some of these other contexts where you might want a valid replacement for a duck that looked like one, walked like one and quacked like one but was not one?
alex43578•40m ago
Decoy for duck hunting?
omnimus•31m ago
Are you suggesting LLMs are a decoy for investor hunting?
catoc•1h ago
I interpret “artificial” in “artificial general intelligence” as “non-biological”.

So in Tao’s statement I interpret “genuine” not as an adverb modifying the “artificial” adjective but as an attributive adjective modifying the noun “intelligence”, describing its quality… “genuine intelligence that is non-biological in nature”

mindcrime•1h ago
> So in Tao’s statement I interpret “genuine” not as an adverb modifying the “artificial” adjective but as an attributive adjective modifying the noun “intelligence”, describing its quality… “genuine intelligence that is non-biological in nature”

That's definitely possible. But it seems redundant to phrase it that way. That is to say, the goal (the end goal anyway) of the AI enterprise has always been, at least as I've always understood it, to make "genuine intelligence that is non-biological in nature". That said, Terry is a mathematician, not an "AI person" so maybe it makes more sense when you look at it from that perspective. I've been immersed in AI stuff for 35+ years, so I may have developed a bit of myopia in some regards.

catoc•1h ago
I agree, it’s redundant. To us humans - to me at least - intelligence is always general (calculator: not; chimpanzee: a little), so “general intelligence” can already be considered redundant. Using “genuine” heaps on even more redundancy (with the assumed goal of distinguishing “genuine” AGI from tools that appear smart in limited domains).
scellus•56m ago
I find it odd that the post above is downvoted to grey; it feels like some sort of latent war of viewpoints is going on, as under some other AI posts. (Although these misvotes are usually fixed when the US wakes up.)

The point above is valid. I'd like to deconstruct the concept of intelligence even more. What humans are able to do is a relatively artificial collection of skills that a physical and social organism needs. The highly valued intelligence around math etc. is a corner case of those abilities.

There's no reason to think that human mathematical intelligence is unique in its structure, an isolated, well-defined skill. Artificial systems are likely to be able to do much more: maybe not exactly the same peak ability, but adjacent ones, many of which will be superhuman and augmentative to what humans do. This will likely include "new math" in some sense too.

omnimus•7m ago
What everybody is looking for is imagination and invention. Current AI systems can give a best-guess statistical answer from the dataset they've been fed. It is always compression.

The problem, and what most people intuitively understand, is that this compression is not enough. There is something more going on, because people can come up with novel ideas/solutions and, what's more important, they can judge and figure out whether a solution will work. So even if the core of the idea is “compressed” or “mixed” from past knowledge, there is some other process going on that leads to the important part: invention and progress.

That is why people hate the term AI: it is just a partial capability of “intelligence”, or it might even be a complete illusion of intelligence that is nowhere close to what people would expect.
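
(Aside on the "it is always compression" point: the standard information-theoretic link is that a model's negative log-probabilities are, via arithmetic coding, code lengths, so better next-token prediction is literally tighter compression. The sketch below is a hypothetical Python illustration of that accounting, not anyone's actual tooling.)

    import math

    # Hypothetical illustration: under arithmetic coding, a token the model assigns
    # probability p costs about -log2(p) bits, so the sum of negative log-probabilities
    # over a text is (roughly) its compressed size under that model.
    def compressed_size_bits(token_probs):
        """token_probs: the probability the model gave to each actual next token."""
        return sum(-math.log2(p) for p in token_probs)

    print(compressed_size_bits([0.9, 0.8, 0.95]))  # confident and right: ~0.55 bits
    print(compressed_size_bits([0.1, 0.05, 0.2]))  # unsure: ~9.97 bits for the same text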

Peteragain•1h ago
Exactly! My position is that "glorified autocomplete" is far more useful than it seems. In GOFAI terms, it does case-based reasoning... but better.
Izikiel43•49m ago
I call it Clippy's revengeance
Davidzheng•1h ago
The text continues "with current AI tools", which is not clearly defined to me (does it mean current gen + scaffold? Anything that is an LLM reasoning model? Anything built with a large LLM inside?). In any case, the title is misleading for not containing the end of the sentence. Can we please fix the title?
Davidzheng•1h ago
Also, I think the main source of interest is that it was said by Terry, so that should be in the title too.
blobbers•1h ago
I think what Terry is saying is that with the current set of tools, there are classes of problems requiring cleverness where you can guess and check (glorified autocomplete): check the answer, fail, then add information from the failure and repeat.

I guess ultimately what is intelligence? We compact our memories, forget things, and try repeatedly. Our inputs are a bit more diverse but ultimately we autocomplete our lives. Hmm… maybe we’ve already achieved this.
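
(The guess-and-check loop described above can be written down directly; here is a minimal, hypothetical Python sketch. propose, check, and describe_failure are stand-ins for a model call and a problem-specific verifier, not any real API.)

    # Minimal sketch of the guess / check / learn-from-failure loop described above.
    # `propose`, `check`, and `describe_failure` are hypothetical stand-ins for a
    # model call and a problem-specific verifier, not any real API.
    def solve(problem, propose, check, describe_failure, max_attempts=10):
        notes = []                                  # information accumulated from failures
        for _ in range(max_attempts):
            candidate = propose(problem, notes)     # guess ("glorified autocomplete")
            ok, result = check(problem, candidate)  # check the answer
            if ok:
                return candidate
            notes.append(describe_failure(candidate, result))  # add info from the failure
        return None                                 # no verified answer within the budget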

mentalgear•1h ago
Some researchers have proposed using, instead of the term "AI", the much more fitting "self-parametrising probabilistic model", or just "advanced autocomplete" - that would certainly take the hype-inducing marketing PR away.
attendant3446•55m ago
The term "AI" didn't make sense from the beginning, but I guess it sounded cool and that's why everything is "AI" now. And I doubt it will change, regardless of its correctness.
metalman•52m ago
AI is intermittent wipers for words, and the two are completely tied, as the perfect test for AI will be to run intermittent wipers to everybody's satisfaction.
pavlov•48m ago
That’s like arguing that washing machines should be called rapid-rotation water agitators.

It’s the result that consumers are interested in, not the mechanics of how it’s achieved. Software engineers are often extraordinarily bad at seeing the difference because they’re so interested in the implementation details.

ForHackernews•46m ago
I'd be mad if washing machines were marketed as a "robot maid"
pavlov•43m ago
A woman from 1825 would probably happily accept that description though (notwithstanding that the word “robot” wasn’t invented yet).

A machine that magically replaces several hours of her manual work? As far as she’s concerned, it’s a specialized maid that doesn’t eat at her table and never gets sick.

auggierose•38m ago
Machines do get "sick" though, and they eat electricity.
pavlov•28m ago
Negligible cost compared to a real maid in 1825. The washing machine also doesn’t get pregnant by your teenage son and doesn’t run away one night with your silver spoons — the upkeep risks and replacement costs are much lower.
omnimus•37m ago
Shame we are in 2025, huh? Ask someone today if they accept a washing machine as a robot maid.
pavlov•31m ago
The point is that, as far as development of AI is concerned, 2025 consumers are in the same position as the 1825 housewife.

In both cases, automation of what was previously human labor is very early and they’ve seen almost nothing yet.

I agree that in the year 2225 people are not going to consider basic LLMs artificial intelligences, just like we don’t consider a washing machine a maid replacement anymore.

kylebyte•42m ago
The problem is that intelligence isn't the result, or at the very least the ideas that word evokes in people don't match the actual capabilities of the machine.

Washing is a useful word to describe what that machine does. Our current setup is like if washing machines were called "badness removers," and there was a widespread belief that we were only a few years out from a new model of washing machine being able to cure diseases.

lxgr•31m ago
Arguably there isn't even a widely shared, coherent definition of intelligence: To some people, it might mean pure problem solving without in-task learning; others equate it with encyclopedic knowledge etc.

Given that, I consider it quite possible that we'll reach a point where even more people will consider LLMs to have reached or surpassed AGI, while others still consider them only "sufficiently advanced autocomplete".

red75prime•45m ago
It's a nice naming, fellow language-capable electrobiochemical autonomous agent.
dist-epoch•28m ago
The proof of Riemann hypothesis is [....autocomplete here...]
moktonar•1h ago
There’s a guaranteed path to AGI, but it’s blocked behind computational complexity. Finding an efficient algorithm to simulate quantum mechanics should be a top priority for those seeking AGI. A more promising way around it is using quantum computing, but we’ll have to wait for that to become good enough.
themafia•1h ago
Required energy density at the necessary scale will be your next hurdle.
legulere•53m ago
How would simulating quantum mechanics help with AGI?
nddkkfkf•37m ago
Obviously, quantum supremacy is semiologically orthogonal to AGI (Artificial General Intelligence) ontological recursive synapses... this is trivial.
nddkkfkf•34m ago
now buy the stock
moktonar•19m ago
By simulating it
legulere•7m ago
What exactly should get simulated and how do you think quantum mechanics will help with this?
lxgr•15m ago
That would arguably not be artificial intelligence, but rather simulated natural intelligence.

It also seems orders of magnitude less resource efficient than higher-level approaches.

relistan•59m ago
These things work well on the extremely limited task impetus that we give them. Even if we sidestep the question of whether or not LLMs are actually on the path to AGI, imagine instead the amount of computing and electrical power required, with current computing methods and hardware, to respond to and process all the input a person handles at every moment of the day. Somewhere between current inputs and the full load of inputs the brain handles may lie “AGI”, but it’s not clear there is anything like that on the near horizon, if only because of computing power constraints.
trio8453•32m ago
> This results in the somewhat unintuitive combination of a technology that can be very useful and impressive, while simultaneously being fundamentally unsatisfying and disappointing

Useful = great. We've made incredible progress in the past 3-5 years.

The people who are disappointed have their standards and expectations set at "science fiction".

lxgr•29m ago
I think many people are now learning that their definition of intelligence was actually not very precise.

From what I've seen, in response to that, goalposts are then often moved in the way that requires least updating of somebody's political, societal, metaphysical etc. worldview. (This also includes updates in favor of "this will definitely achieve AGI soon", fwiw.)

knallfrosch•22m ago
I remember when the goal posts were set at the "Turing test."

That's certainly not coming back.

Taek•31m ago
We seem to be moving the goalposts on AGI, are we not? 5 years ago, the argument that AGI wasn't here yet was that you couldn't take something like AlphaGo and use it to play chess. If you wanted that, you had to do a new training run with new training data.

But now, we have LLMs that can reliably beat video games like Pokemon, without any specialized training for playing video games. And those same LLMs can write code, do math, write poetry, be language tutors, find optimal flight routes from one city to another during the busy Christmas season, etc.

How does that not fit the definition of "General Intelligence"? It's literally as capable as a high school student for almost any general task you throw it at.

lxgr•28m ago
I think we're noticing that our goalposts for AGI were largely "we'll recognize it when we see it", and now as we are getting to some interesting places, it turns out that different people actually understood very different things by that.