frontpage.

Project Gutenberg – keeps getting better

https://www.gutenberg.org/
197•JSeiko•1h ago•59 comments

A 0-click exploit chain for the Pixel 10

https://projectzero.google/2026/05/pixel-10-exploit.html
197•happyhardcore•4h ago•84 comments

We don't know why Malawi is poor

https://newsletter.deenamousa.com/p/we-dont-know-why-malawi-is-poor
24•alphabetatango•46m ago•21 comments

Image-blaster: Creates 3D environments, SFX, and meshes from a single image

https://github.com/neilsonnn/image-blaster
28•MattRogish•2h ago•1 comment

O(x)Caml in Space

https://gazagnaire.org/blog/2026-05-14-borealis.html
192•yminsky•6h ago•37 comments

I designed a nibble-oriented CPU in Verilog to build a scientific calculator

https://github.com/gdevic/FPGA-Calculator
7•gdevic•27m ago•0 comments

Hightouch (YC S19) Is Hiring

https://hightouch.com/careers
1•joshwget•41m ago

I built Zenith: a live local-first fixed viewport planetarium

https://smorgasb.org/zenith-tech/
33•surprisetalk•1h ago•3 comments

Explore Wikipedia Like a Windows XP Desktop

https://explorer.samismith.com/
380•smusamashah•8h ago•98 comments

ASCII by Jason Scott

https://ascii.textfiles.com/
76•bookofjoe•3h ago•13 comments

Show HN: Watch a neural net learn to play Snake

https://ppo.gradexp.xyz/
38•c1b•1d ago•6 comments

High dimensional geometry is transforming the MRI industry (2017) [pdf]

https://www.ams.org/government/DonohoPresentation06-28-17Final.pdf
53•nill0•4h ago•13 comments

Radicle: Sovereign {code forge} built on Git

https://radicle.dev/
149•KolmogorovComp•5h ago•38 comments

A new book on Steve Jobs at NeXT

https://spectrum.ieee.org/steve-jobs-next-computer
117•rbanffy•7h ago•99 comments

Removing the modem and GPS from my 2024 RAV4 hybrid

https://arkadiyt.com/2026/05/13/removing-the-modem-and-gps-from-my-rav4/
1008•arkadiyt•1d ago•533 comments

Aperio Lang

https://aperio-lang.github.io/aperio/introduction.html
7•mmcclure•30m ago•0 comments

Amazon workers under pressure to up their AI usage are making up tasks

https://www.fastcompany.com/91541586/amazon-workers-pressured-to-up-ai-use-extraneous-tasks
189•hackernj•4h ago•169 comments

U.S. DOJ demands Apple and Google unmask over 100k users of car-tinkering app

https://macdailynews.com/2026/05/15/u-s-doj-demands-apple-and-google-unmask-over-100000-users-of-...
12•tencentshill•14m ago•0 comments

OpenAI is connecting ChatGPT to bank accounts via Plaid

https://firethering.com/chatgpt-bank-account-plaid-openai/
51•steveharing1•1h ago•75 comments

A few words on DS4

https://antirez.com/news/165
394•caust1c•19h ago•161 comments

Trade Dollars with other startups. Book it as revenue

https://www.revswap.ai/
150•tormeh•4h ago•104 comments

NanoTDB – Golang Append-Only Time Series DB

https://github.com/aymanhs/nanotdb
37•aymanhs72•7h ago•5 comments

The sigmoids won't save you

https://www.astralcodexten.com/p/the-sigmoids-wont-save-you
49•Tomte•6h ago•63 comments

Ask HN: How to be SOC2 Type 2 compliant as a solo-entrepreneur?

73•sochix•10h ago•74 comments

Details of the Daring Airdrop at Tristan Da Cunha

https://www.tristandc.com/government/news-2026-05-11-airdrop.php
226•kspacewalk2•13h ago•87 comments

Building ML framework with Rust and Category Theory

https://hghalebi.github.io/category_theory_transformer_rs/
83•adamnemecek•1d ago•17 comments

RTX 5090 and M4 MacBook Air: Can It Game?

https://scottjg.com/posts/2026-05-05-egpu-mac-gaming/
664•allenleee•1d ago•159 comments

We are retiring our bug bounty program

https://turso.tech/blog/the-wonders-of-ai
313•tjek•4h ago•232 comments

First public macOS kernel memory corruption exploit on Apple M5

https://blog.calif.io/p/first-public-kernel-memory-corruption
421•quadrige•23h ago•112 comments

Codex is now in the ChatGPT mobile app

https://openai.com/index/work-with-codex-from-anywhere/
435•mikeevans•21h ago•222 comments

The sigmoids won't save you

https://www.astralcodexten.com/p/the-sigmoids-wont-save-you
49•Tomte•6h ago

Comments

philipallstar•1h ago
But they do explain the improvement of AI driving 2017-2021 vs 2022-2026.
nathan_compton•1h ago
A lot of words to say "The initial part of a sigmoidal curve is not very informative about the parameters of the sigmoid function in question."
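A minimal numeric sketch of that point (all parameters hypothetical): two logistic curves whose ceilings differ by a factor of 100, but whose midpoints are chosen so the early exponential portions line up, are nearly indistinguishable on their first few samples.

```python
import math

def logistic(x, L, k, x0):
    # Standard logistic: ceiling L, steepness k, midpoint x0
    return L / (1.0 + math.exp(-k * (x - x0)))

# Hypothetical curves: ceilings 100x apart, midpoints chosen so the
# early exponential portions match (x0 = 5 + ln(100) ≈ 9.605).
a = [logistic(x, 10, 1.0, 5.0) for x in range(3)]
b = [logistic(x, 1000, 1.0, 9.605) for x in range(3)]

# On the first few samples the two curves agree to within a few percent,
# even though their eventual plateaus differ by a factor of 100.
for ya, yb in zip(a, b):
    print(f"{ya:.4f}  {yb:.4f}")
```

An early-data fit therefore pins down the growth rate fairly well, but barely constrains the ceiling at all.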
inglor_cz•1h ago
That is true, but I generally enjoy reading a lot of words from Scott, who has a talent for writing.

The entire plot of the Lord of the Rings could probably be compressed into less than 10 kB of text too.

Edit: this seems to be a controversial comment, but IMHO a blog of Scott Alexander's type is an art form, not just a communication channel.

jeffreyrogers•36m ago
I find him more interesting when he talks about non-AI topics. Lots of other interesting people are like this too. I'd rather get my knowledge on AI from people who have unique insights into it. Scott has a lot of unique perspectives of his own, but his views on AI are bog-standard for his social group.
inglor_cz•19m ago
Frankly, me too, but he is still smart enough to introduce some grains of original thought even into those bog-standard views.
andai•1h ago
Well, curve shape aside, the high watermark might be lower than where it tapers off.

https://news.ycombinator.com/item?id=46199723

BoredPositron•1h ago
If you use the log scale you'll see that the time horizon of Opus 4.6 was as expected...
afthonos•58m ago
As expected by the exponential. The Wharton study was predicting when the exponential would turn into a sigmoid.
ReptileMan•38m ago
Everything is linear on a log-log scale with a fat marker.
gm678•1h ago
I don't know what the Y-axis is supposed to be on that Wharton AI capabilities graph, but I am not really convinced that Opus 4.6 has more than double the intelligence/capability/whatever of GPT 5.1 Max.
BoredPositron•1h ago
https://metr.org/time-horizons/ on linear scale. Clickbait garbage article, like most of his in the last year.
afthonos•1h ago
…yeah, that’s where you see the exponential?
NitpickLawyer•1h ago
IIRC that graph tracks capabilities as time_to_solve a task for humans (i.e. the model can now handle tasks that usually take a human ~8h). Which, depending on what tasks you look at, could be a reasonable finding. I could see Opus 4.6 handling tasks that take ~8h for humans, and that 5.1 couldn't previously handle (with 5.1 being "limited" at 4h tasks let's say). It is a bit arbitrary, but I think this is what they're tracking.
lukan•38m ago
"It is a bit arbitrary, but I think this is what they're tracking."

I don't know if they can get their numbers right this way, but this seems a way more useful metric, than theoretic capabilities.

cyanydeez•21m ago
OK, but aren't you just measuring efficiency and not the big I in AGI improvements?
lukan•15m ago
Yes, but this study was not about that and "just efficiency" is actually what most people are after.

At least I want AI to solve my problems, not score high on an academic leaderboard.

jrumbut•20m ago
Without knowing more about their methodology, it seems like a lot of the recent improvements have involved the AI itself taking time to complete the task.

At first the models turned a 5 minute task into a 5 second task (by 5 seconds I mean a very short amount of time, not precisely 5 seconds). Then they turned a 15 minute task into a 5 second task.

Opus 4.6 completes 8 hour tasks all the time but (at least in my experience) it isn't spitting the answer out in 5 seconds anymore. It's using chain of thought and tools and the time to completion is measured in minutes or maybe hours.

In my experiments with local LLMs, a substantial part of the gap between frontier and local (for everyday use) is in tooling and infrastructure.

That is why I am sympathetic to the idea we are leveling off. But to bring in the air speed example from the article, I don't think we've reached the equivalent of the ramjet yet. I suspect in the coming years there will be new architectures, new hardware, and new ways to get even more capable models.

MadxX79•7m ago
I don't know why people are so impressed by 8h.

I trained an LLM to write the whole Harry Potter series, and that took JK Rowling like 17 years.

For my next point on the graph, I'll train the LLM to write the Bible, something that took humans >1500 years.

myhf•35m ago
According to this article: whenever someone games a benchmark to make an upward chart on some y-axis, it's YOUR responsibility to prove how and why that trend can't continue indefinitely.

🙄

AnimalMuppet•32m ago
I'm pretty sure that gaming benchmarks can continue indefinitely.
skybrian•12m ago
Seems to me that the default is "I don't know what's going to happen" and if you're making a confident prediction, bring evidence.

Scott makes a Lindy effect argument which is plausible, but don't let that fool you, we still don't know what's going to happen.

strken•30m ago
Check out Re-Bench and HCAST.

The tasks are obviously all of the form "Go do this, and if you get the following output you passed". Setting up a web server apparently takes 15 minutes for a human, which is news to me since I'm able to search for https://gist.github.com/willurd/5720255, find the python one-liner, and copy it within about ten seconds.

Anyway, this is cool but it does not mean Claude can perform any human tasks that take less than 8 hours and are within its physical capabilities.

throwaway27448•22m ago
> more than double the intelligence/capability/whatever

I'm curious what people really mean when they say this. Intelligence is famously hard to define, let alone measure; it certainly doesn't scale linearly; it only loosely correlates to real-world qualities that are easy to measure; etc. Are you referring to coding ability or...?

adw•13m ago
https://podcasts.apple.com/us/podcast/machine-learning-stree... is a pretty good primer on METR, what it measures, and its limitations.
devmor•1h ago
"Exponentials all tend to become sigmoids but you can't predict exactly when" is a true statement, but I'm not sure it needed an article.

This doesn't say much, and the author fights their own points a couple of times, suggesting that they maybe didn't think through what they wanted to write until they were in the middle of writing it and realized their assumptions didn't match what they expected the data to say.

I really don't get the point of what I just read.

aspenmartin•35m ago
The point is that the tiring arguments from AI skeptics saying “things are flattening, they have to”, while technically correct, say nothing, because no one knows when that will happen and we see no mechanism for it yet. Lindy's law as a reasonable prediction under total uncertainty is interesting and insightful, and a lot of people don't know about it or why it holds. I did enjoy the reference to this!
addaon•1h ago
https://xkcd.com/605/
inglor_cz•1h ago
Hmmm, this is quite an interesting take by Scott.

Lindy's Law is not actually a law, and many exacting minds will be provoked by the very name; it also fails spectacularly in certain contexts (e.g. the lifetime of a single organism, though not necessarily the existence of an entire species).

But at the same time, I am willing to take its invocation in the context of AI somewhat seriously. There is an international arms race with China, which has less compute, but more engineers and scientists. This sort of intellectual arms race does not exhaust itself easily.

A similar space race in the 1950s and 1960s progressed from the first unmanned spaceflight to a moonwalk in a mere 12 years, which is probably less than what it takes to approve a bicycle lane in Chicago now.

krupan•48m ago
"There is an international arms race with China"

I keep seeing this. Where did it come from? Has China said that they intend to attack other countries using AI? Have other countries declared that they intend to attack China with AI?

Also, why does anyone believe that AI could actually be that dangerous, given its inherently unpredictable and unreliable performance? I would be terrified to rely on AI in a life or death situation.

inglor_cz•45m ago
It was a metaphor. I meant, and later clarified, an intellectual arms race.

BTW your handle is an actual Czech word, minus a diacritic sign ("křupan"), and a bit amusing one. It basically means hillbilly. Not that it matters, just FYI.

Anyway: AI will be used in military context, and it probably already is. Both for target acquisition and maybe even driving the weapon itself. As of now, the Ukrainians are almost certainly operating some AI-enabled killer drones.

dmbche•43m ago
https://www.forbes.com/sites/greatspeculations/2025/11/25/wh...
aspenmartin•38m ago
AI in war is basically Palantir's whole business model. You have a system that can effectively deal with ambiguity and has superhuman performance on reasoning plus superhuman physical abilities via embodiment…

Inherently unpredictable and unreliable performance is quite the feature of human beings as well.

mitthrowaway2•15m ago
It's not a law per se, but there are rules for reasoning under uncertainty to get the most out of what limited knowledge you have, and Lindy's law arises from that. To do better than Lindy's law requires having additional information about the problem beyond just the one data point.
krupan•48m ago
News flash: predicting the future is hard
energy123•42m ago
The individual who is the best at predicting the future is predicting ASI and full labor automation by 2040:

https://xcancel.com/peterwildeford/status/202963666232244661...

gerikson•39m ago
Past results is no guarantee of future performance.
Aurornis•37m ago
> The individual who is the best at predicting the future

Going to need a big citation for that claim

margalabargala•26m ago
Source: trust me bro
margalabargala•27m ago
> The individual who is the best at predicting the future

Lol

layer8•19m ago
Predicting who will predict the future best is hard.
kubb•37m ago
If the scary AI is so inevitable, why do you feel such an overwhelming need to convince people about that? Surely you can just wait a bit, and they'll see for themselves.
adleyjulian•30m ago
1. It's not inevitable. 2. Those that see AI as an existential risk don't generally think it's a guarantee, but if it's say a 5% chance then that's worth addressing/mitigating. 3. That's not what this article was even about.
kubb•19m ago
Sounds like the burden is on you to explain either:

1. If you're not treating my claim as a black box: what is your model of what the article was about? Are you aware, for example, of the last paragraph of the article? I think that WAS what the article was about. Do you have specific opinions on e.g. how I went wrong and where my model differs?
2. If you are treating it as a black box: what's your default expectation based on the law of Nothing Ever Happens?

Just kidding, you don't need to explain anything. A"I" fearmongers should though.
mitthrowaway2•20m ago
By that reasoning, why even warn people about anything? Why do road construction crews put up signs saying "ROAD CLOSED AHEAD" when you can just drive on and see for yourself?
kubb•16m ago
Indeed, why warn people about real things that exist in the world? That is EXACTLY the same as inciting fear about something imaginary (not even projected).
LarsDu88•34m ago
I think an interesting thing about recent AI developments is that it's all happening right as we hit the diminishing-returns side of another "exponential that's actually a sigmoid": Moore's law.

The naive expectation is that AI will slow down because Moore's law is coming to an end, but if you really think about the models and how they are currently implemented in silicon, they are still inefficient as hell.

At some point someone will build a tensor processing chip that replaces all the digital matmuls with analogue logamp matmuls, or some breakthrough in memristors will start breaking down the barrier between memory and compute.

With the right level of research funding in hardware, the ceiling for AI can be very high.

cyanydeez•31m ago
they already did put a model into the silicon and it's crazy fast. https://chatjimmy.ai/

I'm pretty sure there's a 3-year design goal starting this year that'll do that to any of the Qwen, DeepSeek, etc. models. There's a lot you could do with sped-up models of this quality.

It might even be bad enough that the real bubble is how much we don't need giant data centers, when 80-90% of use cases could just be a silicon chip with a model rather than, as you say, bloated SOTA.

clickety_clack•23m ago
It would be pretty cool to have interchangeable USB keys with models on them.
throwaway27448•24m ago
Even at orders of magnitude greater speed, we've still hit diminishing returns for quality of output. We simply haven't found anything like superhuman reasoning ability, just superhuman (potentially) reasoning speed.
Brendinooo•30m ago
> then what is their model?

My mental model has been 3D computer graphics: doubling the polygon count had huge returns early on but delivered diminishing returns over time.

Ultimately, you can't make something look more realistic than real.

I don't know what the future holds, but the answer to the question "can LLMs be more realistic than real" will determine much about whether or not you think the curve will level off soon.

btilly•29m ago
Lindy’s Law is an absolute gem, that I'm keeping.

If we don't understand the fundamental limits to any particular kind of trend, our default assumption should be that it will continue for about as long as it has gone on already.

We can, in fact, easily put a confidence interval on this. With 90% odds we're not in the first 5% of the trend, or the last 5% of the trend. Therefore it will probably go on between 1/19th longer, and 19 times longer. With a median of as long as it has gone on so far.

This is deeply counterintuitive. We believe the trend is finite, so our intuition says that every year it goes on brings us a year closer to when it stops. But the expectation above says the opposite: every year that it goes on, we should expect it to go on for a year longer still.

How can we apply that? A simple way is stocks. How long should we expect a rapidly growing company to continue growing rapidly?
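That interval is easy to check numerically (the elapsed time here is hypothetical): assuming we sit at a uniformly random fraction f of the trend's total lifetime, the remaining time is elapsed · (1 − f) / f.

```python
# Toy check of the Lindy-style interval: if a trend has lasted `elapsed`
# years and we assume we're at a uniformly random fraction f of its total
# lifetime, the remaining time is elapsed * (1 - f) / f.
elapsed = 10.0  # hypothetical: trend has run 10 years so far

def remaining(f, elapsed):
    return elapsed * (1 - f) / f

# 90% interval: exclude the first and last 5% of the lifetime
low = remaining(0.95, elapsed)     # near the end: ~elapsed/19 left
high = remaining(0.05, elapsed)    # near the start: ~19*elapsed left
median = remaining(0.50, elapsed)  # median: as long again as so far

print(low, median, high)
```

With 10 years elapsed this gives roughly 0.53 to 190 years remaining, with a median of another 10.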

jerf•12m ago
It's an interesting idea, and it may be something that could be mathematically justified, but I do think this is an abuse of Lindy's Law in the absence of such a justification. Per Wikipedia [1]:

"The Lindy effect applies to non-perishable items, like books, those that do not have an "unavoidable expiration date"."

And later in the article you can see the mathematical formulation which says the law holds for things with a Pareto distribution [2]. I'd want to see some sort of good analysis that "the life span of exponential growth curves" is drawn from some Pareto distribution. I don't think it's completely out of the question. But I'm also nowhere near confident enough that it is a true statement to casually apply Lindy's Law to it.

[1]: https://en.wikipedia.org/wiki/Lindy_effect

[2]: https://en.wikipedia.org/wiki/Pareto_distribution

LPisGood•6m ago
This is the exact same heuristic used in CPU scheduling.

We expect fresh processes to terminate quickly and long running processes to last for a while longer.

itkovian_•26m ago
The other thing people don't understand is that exponential curves are self-similar: the start of an exponential looks like an exponential. People always look at one and think 'well, that's it, it's exponential now, I've missed it, it can't be sustained'. Nope.

Good example of this is number of submissions to neurips/icml/iclr. In 2017 that curve was exponential.
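The self-similarity point is just the identity e^(k(x+c)) = e^(kc) · e^(kx): shifting an exponential in time only rescales it by a constant. A quick check (k and c are arbitrary, hypothetical values):

```python
import math

k, c = 0.7, 3.0  # hypothetical growth rate and time shift

# Shifting an exponential in time only rescales it: e^(k(x+c)) = e^(kc) * e^(kx),
# so any window of the curve looks like a rescaled copy of any other window.
ratios = [math.exp(k * (x + c)) / math.exp(k * x) for x in (0.0, 1.0, 2.5)]
print(ratios)  # every ratio equals e^(k*c)
```

That is why "it already looks exponential" tells you nothing about how far along the curve you are.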

zkmon•24m ago
The curve is a smoothed step curve (y=1 if x>1 otherwise 0). Nature doesn't allow any change to happen instantly at any degree of rate of change. The curve is just a manifestation of a change, with exponential smoothing of the sharp corners.

For example, when a car starts, its speed and acceleration become more than zero. But what about the rate of change at higher degrees? It doesn't suddenly change from zero acceleration to non-zero. That means the car has a non-zero derivative at all degrees. In other words, the movement is exponential. The same thing happens in reverse when the car reaches a constant speed.

patrickmay•22m ago
Stein's Law: "If something cannot go on forever, it will stop."
skybrian•10m ago
Yes, but figuring out when is the hard part.
OscarCunningham•21m ago
John D Cook gives more technical details here: "Trying to fit a logistic curve" https://www.johndcook.com/blog/2025/12/20/fit-logistic-curve...
bedobi•15m ago
Why is this author tolerated on Hacker News? He's not actually knowledgable about 99% of subjects he posts about.
simianparrot•8m ago
Because HN is YCombinator which has invested in probably hundreds of «AI» firms by now. Including OpenAI.

Allowing slop articles like this literally prints them valuation money.

ngriffiths•1m ago
I think there are many ways someone with his lack of expertise can still be valuable, including:

- Making connections to other subjects that an expert would miss. The hall of fame of sigmoid predictions is just excellent, I already know I'm going to be reminded of it some time in the future. Very entertaining way to get the point across.

- Writing about tricky concepts in a very accessible and elegant way, which experts are notoriously bad at doing themselves - they are often optimizing for other specialists.

- Being able to write with an air of speculation and experimentation with ideas that experts and institutions often can't afford. Experts have to maintain their track record; Scott Alexander can say "lol just double the timeline"

janalsncm•12m ago
> What if you don’t fully understand the process? AI forecasters know some things (like how data centers work and how much it costs to build them). But they’re unsure about other things (researchers keep inventing new paradigms of data generation that get over data walls, but for how long?), and other things are entirely opaque (What is intelligence really? Why do scaling laws work? Might they just stop working at some point?) Is there anything you can do here?

This is the crux of the article. To a large extent continued progress depends on a stable increase in compute, an increase in training data, and an increase in good ideas to squeeze more out of both of them.

One calculation you could do is a survival function: for each of the above, how long before it is disrupted? For example, China could crack down on AI or invade Taiwan. Or data centers become politically unpopular in the US. Or, we could run out of great ideas. Very hard to predict.
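A toy version of that survival calculation (every rate here is a made-up assumption, purely illustrative): model each prerequisite as failing independently at a constant annual hazard rate, so the probability that progress is still undisrupted at year t is the product of the individual survival probabilities.

```python
import math

# Toy survival model (all rates hypothetical): treat each prerequisite for
# continued AI progress as failing independently at a constant annual rate.
hazards = {
    "compute growth": 0.05,   # e.g. export controls, Taiwan risk
    "training data": 0.10,    # data walls outpacing new generation tricks
    "research ideas": 0.08,   # running out of big algorithmic wins
}

def survival(t_years):
    # P(no prerequisite has failed by year t) under independence:
    # product of exp(-rate * t) terms = exp(-sum(rates) * t)
    total_rate = sum(hazards.values())
    return math.exp(-total_rate * t_years)

for t in (1, 5, 10):
    print(t, round(survival(t), 3))
```

The constant-rate assumption is of course the weakest part; the point of the exercise is only that several individually modest risks compound quickly over a decade.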

dsign•9m ago
We did hit the sigmoid's plateau on airplane speed, but the applications of airplane speed are still coming (how fast can a Chinese company air-ship the PCB you ordered three minutes ago?). I expect the same will happen with LLMs, though I also happen to believe things are just getting started on end capabilities.