
Scribble-based forecasting and AI 2027

https://dynomight.net/scribbles/
55•venkii•7mo ago

Comments

keeganpoppen•7mo ago
this is actually quite brilliant. and articulates the value and utility of subjective forecasting-- something i too find somewhat underrated-- extremely clearly and convincingly. and same goes for the biases we have toward reducing things to a mathematical model and then treating that model as more "credible" despite there being (1) an infinite universe of possible models, so you can use them to "say" whatever you want anyway and (2) it complects the thing being modeled with some mathematical phenomenon, which is not always a profitable approach.

the scribble method is, of course, quite sensitive to the number of hypotheses you choose to consider, as it effectively considers them all to be of equal probability, but it also surfaces a lot of interesting interactions between different hypotheses that have nothing to do with each other, but still have effectively the "same" prediction at various points in time. and i don't see any reason that you can't just be thoughtful about what "shapes" you choose to include and in what quantity-- basically like a meta-subjective model of which models are most likely or something haha. that said, there's also some value in the low-res aspect of just drawing the line-- you can articulate exactly what path you are thinking without having to pin that thinking to some model that doesn't actually add anything to the prediction other than fitting the same shape as what is in your mind.
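A minimal sketch of the equal-probability reading above: treat each scribble as an equally weighted trajectory and read off the spread of predictions at each future time. The curve shapes and numbers here are my own illustrative assumptions, not anything from the article.

```python
import math

# Each "scribble" is a hypothesis curve; the method weights them equally.
def linear(t):      return 1.0 + 0.5 * t
def exponential(t): return math.exp(0.3 * t)
def sigmoid(t):     return 10.0 / (1.0 + math.exp(-(t - 5.0)))

scribbles = [linear, exponential, sigmoid]

def forecast(t):
    """Equal-weight ensemble: every scribble counts the same."""
    values = sorted(f(t) for f in scribbles)
    return {"low": values[0], "median": values[len(values) // 2], "high": values[-1]}

for t in (2, 5, 8):
    print(t, forecast(t))
```

Being "thoughtful about what shapes to include" then just means curating or duplicating entries in `scribbles` to encode a meta-level prior.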

groby_b•7mo ago
At least for me, the core criticism of AI 2027 was always that it was an extremely simplistic "number go up, therefore AGI", with some nice fiction-y words around it.

The scribble model kind-of hints at what a better forecast would've done - you start from the scribbles and ask "what would it take to get that line, and how'd we get there". And I love that the initial set of scribbles will, amongst other things, expose your biases. (Because you draw the set of scribbles that seems plausible to you, a priori)

The fact that it can both guide you towards exploring alternatives and exposing biases, while being extremely simple - marvellous work.

Definitely going to incorporate this into my reasoning toolkit!

ben_w•7mo ago
To me, 2027 looks like a case of writing the conclusion first and then trying to explain backwards how it happens.

If everything goes "perfectly", then the logic works (to an extent, but the increasing rate of returns is a suspicious assumption baked into it).

But everything must go perfectly for that, including all the productivity multipliers being independent and the USA deciding to take this genuinely seriously (not fake-seriously, in the form of politicians saying "we're taking this seriously" and not doing much), and therefore rushing the target with no expenses spared, like it's actually an existential threat. I see no way this would be a baseline scenario.

groby_b•7mo ago
It still misses the fact AI is nowhere close to self-improvement.

In fact, there was a paper out on Friday that shows they're impressively bad at it: https://arxiv.org/abs/2506.22419

ben_w•7mo ago
Sure, but that's kinda what I'm saying they're doing wrong.

One of the core claims 2027 is making is, to paraphrase, we get AI to help researchers do the research. If we just presume that this happens (which I'm saying is a mistake), then the AI helps researchers research how to make AI self-improve. But there's not any obvious reason for me to expect that.

I mean, even aside from the narrow issue raised by the METR report earlier this year: it showed that AI could (at the time) complete, with 80% success, only tasks that would take a domain expert 15 minutes, and that this time horizon doubles every 7 months. That would take them to being useful helpers for half-day-to-two-day tasks over 2027, which is still much less than needed for this kind of thing. And beyond that, there's still a lot of unknowns about where we are in what might be a sigmoid for unrealised efficiency gains in such code.

Anyway, this is a much more thorough critique than I'm going to give: https://www.lesswrong.com/posts/PAYfmG2aRbdb74mEp/a-deep-cri...
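A back-of-envelope check of the doubling arithmetic above: start from a 15-minute (80%-success) task horizon and double every 7 months. The start date and the 8-hour work-day length are my assumptions, not METR's.

```python
MINUTES_PER_WORKDAY = 8 * 60

def horizon_minutes(months_elapsed, start_minutes=15.0, doubling_months=7.0):
    # Exponential growth: one doubling per `doubling_months`.
    return start_minutes * 2 ** (months_elapsed / doubling_months)

for months in (0, 14, 28, 42):
    m = horizon_minutes(months)
    print(f"+{months} months: {m:7.1f} min ≈ {m / MINUTES_PER_WORKDAY:.2f} workdays")
```

Roughly 28 months out the horizon is a half workday, and around 42 months it reaches two workdays, consistent with the "half-to-two-day tasks over 2027" reading.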

MarkusQ•7mo ago
Another useful trick: plot the same data several ways. E.g., if you were playing with Moore's law you might plot (log) transistors/cm², "ops/sec", "clock speed", "ops/sec/$", etc., and their inverses, vs. time, as well as things like "how many digits of π can you compute for $1" or "multiples of total world compute in 1970", and do the same extrapolation trick on each.

You _should_ expect to see roughly comparable results, but often you don't and when you don't it can reveal hidden assumptions/flawed thinking.
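A toy version of this cross-check, using made-up transistor counts rather than real data: run the same naive straight-line-in-log-space extrapolation on a metric and on its inverse, then verify the two forecasts agree (here they do by construction; with messier transforms they often won't, which is the point).

```python
import math

# Rough illustrative values, not a real dataset.
years = [1971, 1981, 1991, 2001]
transistors = [2.3e3, 2.9e4, 1.2e6, 4.2e7]

def fit_log_linear(xs, ys):
    """Least-squares line through (x, log10 y)."""
    n = len(xs)
    ls = [math.log10(y) for y in ys]
    mx, ml = sum(xs) / n, sum(ls) / n
    slope = sum((x - mx) * (l - ml) for x, l in zip(xs, ls)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, ml - slope * mx

def extrapolate(xs, ys, x):
    slope, intercept = fit_log_linear(xs, ys)
    return 10 ** (slope * x + intercept)

forward = extrapolate(years, transistors, 2011)
inverse = extrapolate(years, [1 / t for t in transistors], 2011)

# Consistent views: forecasting 1/metric should match 1/forecast(metric).
print(forward, 1 / inverse)
```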

crabl•7mo ago
Interesting! My first thought looking at the scribble chart was "isn't this Monte Carlo simulation?" but reading further it seems more aligned with the "third way" that William Briggs describes in his book Uncertainty[1]. He argues we should focus on direct probability statements about observables over getting lost in parameter estimation or hypothesis testing.

^[1]: https://link.springer.com/book/10.1007/978-3-319-39756-6

empiko•7mo ago
To be honest, I expected the punchline to be about how randomly drawing lines is the same nonsense as using simplistic mathematical modeling without considering the underlying phenomenon. But the punchline never came.

Predicting AI is more or less impossible because we have no idea about its properties. With other technologies, we can reason about how small or how fast a component can get, and this gives us physical limitations that we can observe. With AI, we throw in data and we are or we are not surprised by the behavior the model exhibits. With the few datapoints we have, it seems that more compute and more data usually lead to better performance, but that is more or less everything we can say about it; there is no theory behind it that would guarantee us the gains for the next 10x.

Fraterkes•7mo ago
I'm sorry, I think the line-scribbling idea is neat, but the most salient part of this prediction (how long this is going to take) depends utterly on the scale of the x-axis. If you made x go to 2200 instead of 2050 you could overlay the exact same set of “plausible” lines.
myrmidon•7mo ago
I do agree that the method is sensitive to X-scaling (and also Y-scale, which is logarithmic here!)-- but the "methodology" is at least defensible: scale X/Y to make existing data appear linear and make the "linear extrapolation in scribble space" meet the deadline at the middle of your X-axis.

I'm honestly kinda curious how well this "scribble-forecasting" actually works, but to me this sounds like it could be better than you'd expect from something this silly (but I honestly think that most utility comes from suitably picking between linear, log and semi-log plotspace, allowing you to approximate any linear, polynomial or exponential relationship with a straight scribble...)
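A small sketch of the plotspace point above: exponential data is curved on linear axes but becomes a straight line once the y-axis is log-scaled, so a straight-edge scribble extrapolates it. The growth rate and the noise-free data are illustrative assumptions.

```python
import math

xs = list(range(10))
ys = [5.0 * 1.5 ** x for x in xs]   # exponential growth

def second_diffs(vals):
    # Second differences ~ 0 means "straight" in that space.
    return [vals[i + 2] - 2 * vals[i + 1] + vals[i] for i in range(len(vals) - 2)]

curved   = second_diffs(ys)                          # far from zero
straight = second_diffs([math.log(y) for y in ys])   # ~zero: linear in log space

print(max(abs(d) for d in curved), max(abs(d) for d in straight))
```

The same trick with log-scaled x (or both axes) covers polynomial and power-law shapes, which is why so much of the method's power sits in the choice of axes.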

Fraterkes•7mo ago
Ah, I guess you are completely right about that. I still don't think the article is very substantive, but I agree my criticism isn't really fair.