
Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
1•jesperordrup•1m ago•0 comments

Write for Your Readers Even If They Are Agents

https://commonsware.com/blog/2026/02/06/write-for-your-readers-even-if-they-are-agents.html
1•ingve•1m ago•0 comments

Knowledge-Creating LLMs

https://tecunningham.github.io/posts/2026-01-29-knowledge-creating-llms.html
1•salkahfi•2m ago•0 comments

Maple Mono: Smooth your coding flow

https://font.subf.dev/en/
1•signa11•8m ago•0 comments

Sid Meier's System for Real-Time Music Composition and Synthesis

https://patents.google.com/patent/US5496962A/en
1•GaryBluto•16m ago•1 comments

Show HN: Slop News – HN front page now, but it's all slop

https://dosaygo-studio.github.io/hn-front-page-2035/slop-news
3•keepamovin•17m ago•1 comments

Show HN: Empusa – Visual debugger to catch and resume AI agent retry loops

https://github.com/justin55afdfdsf5ds45f4ds5f45ds4/EmpusaAI
1•justinlord•20m ago•0 comments

Show HN: Bitcoin wallet on NXP SE050 secure element, Tor-only open source

https://github.com/0xdeadbeefnetwork/sigil-web
2•sickthecat•22m ago•1 comments

White House Explores Opening Antitrust Probe on Homebuilders

https://www.bloomberg.com/news/articles/2026-02-06/white-house-explores-opening-antitrust-probe-i...
1•petethomas•22m ago•0 comments

Show HN: MindDraft – AI task app with smart actions and auto expense tracking

https://minddraft.ai
2•imthepk•27m ago•0 comments

How do you estimate AI app development costs accurately?

1•insights123•28m ago•0 comments

Going Through Snowden Documents, Part 5

https://libroot.org/posts/going-through-snowden-documents-part-5/
1•goto1•28m ago•0 comments

Show HN: MCP Server for TradeStation

https://github.com/theelderwand/tradestation-mcp
1•theelderwand•31m ago•0 comments

Canada unveils auto industry plan in latest pivot away from US

https://www.bbc.com/news/articles/cvgd2j80klmo
2•breve•32m ago•1 comments

The essential Reinhold Niebuhr: selected essays and addresses

https://archive.org/details/essentialreinhol0000nieb
1•baxtr•35m ago•0 comments

Rentahuman.ai Turns Humans into On-Demand Labor for AI Agents

https://www.forbes.com/sites/ronschmelzer/2026/02/05/when-ai-agents-start-hiring-humans-rentahuma...
1•tempodox•37m ago•0 comments

StovexGlobal – Compliance Gaps to Note

1•ReviewShield•40m ago•1 comments

Show HN: Afelyon – Turns Jira tickets into production-ready PRs (multi-repo)

https://afelyon.com/
1•AbduNebu•41m ago•0 comments

Trump says America should move on from Epstein – it may not be that easy

https://www.bbc.com/news/articles/cy4gj71z0m0o
6•tempodox•41m ago•2 comments

Tiny Clippy – A native Office Assistant built in Rust and egui

https://github.com/salva-imm/tiny-clippy
1•salvadorda656•45m ago•0 comments

LegalArgumentException: From Courtrooms to Clojure – Sen [video]

https://www.youtube.com/watch?v=cmMQbsOTX-o
1•adityaathalye•48m ago•0 comments

US moves to deport 5-year-old detained in Minnesota

https://www.reuters.com/legal/government/us-moves-deport-5-year-old-detained-minnesota-2026-02-06/
8•petethomas•52m ago•3 comments

If you lose your passport in Austria, head for McDonald's Golden Arches

https://www.cbsnews.com/news/us-embassy-mcdonalds-restaurants-austria-hotline-americans-consular-...
1•thunderbong•56m ago•0 comments

Show HN: Mermaid Formatter – CLI and library to auto-format Mermaid diagrams

https://github.com/chenyanchen/mermaid-formatter
1•astm•1h ago•0 comments

RFCs vs. READMEs: The Evolution of Protocols

https://h3manth.com/scribe/rfcs-vs-readmes/
3•init0•1h ago•1 comments

Kanchipuram Saris and Thinking Machines

https://altermag.com/articles/kanchipuram-saris-and-thinking-machines
1•trojanalert•1h ago•0 comments

Chinese chemical supplier causes global baby formula recall

https://www.reuters.com/business/healthcare-pharmaceuticals/nestle-widens-french-infant-formula-r...
2•fkdk•1h ago•0 comments

I've used AI to write 100% of my code for a year as an engineer

https://old.reddit.com/r/ClaudeCode/comments/1qxvobt/ive_used_ai_to_write_100_of_my_code_for_1_ye...
2•ukuina•1h ago•1 comments

Looking for 4 Autistic Co-Founders for AI Startup (Equity-Based)

1•au-ai-aisl•1h ago•1 comments

AI-native capabilities, a new API Catalog, and updated plans and pricing

https://blog.postman.com/new-capabilities-march-2026/
1•thunderbong•1h ago•0 comments

Scribble-based forecasting and AI 2027

https://dynomight.net/scribbles/
55•venkii•7mo ago

Comments

keeganpoppen•7mo ago
this is actually quite brilliant. and articulates the value and utility of subjective forecasting-- something i too find somewhat underrated-- extremely clearly and convincingly. and same goes for the biases we have toward reducing things to a mathematical model and then treating that model as more "credible" despite there being (1) an infinite universe of possible models, so you can use them to "say" whatever you want anyway and (2) it complects the thing being modeled with some mathematical phenomenon, which is not always a profitable approach.

the scribble method is, of course, quite sensitive to the number of hypotheses you choose to consider, as it effectively considers them all to be of equal probability, but it also surfaces a lot of interesting interactions between different hypotheses that have nothing to do with each other, but still have effectively the "same" prediction at various points in time. and i don't see any reason that you can't just be thoughtful about what "shapes" you choose to include and in what quantity-- basically like a meta-subjective model of which models are most likely or something haha. that said, there's also some value in the low-res aspect of just drawing the line-- you can articulate exactly what path you are thinking without having to pin that thinking to some model that doesn't actually add anything to the prediction other than fitting the same shape as what is in your mind.
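The mechanics being discussed can be sketched in a few lines. This is a toy illustration, not the article's code: the random growth-rate walk, the threshold, and every number below are invented for the example (real scribbles would be drawn by hand, not sampled), but it shows the "every scribble counts equally" step the comment above flags.

```python
import random

def make_scribble(start=1.0, years=25, step=0.1):
    """One hand-drawn-style trajectory: a random walk in the yearly
    log2-growth rate, mimicking a plausible curve on a log plot."""
    level, rate, path = start, 0.3, []
    for _ in range(years):
        rate += random.uniform(-step, step)
        level *= 2 ** max(rate, 0.0)   # the metric can stall but not shrink
        path.append(level)
    return path

def crossing_year(path, threshold, start_year=2025):
    """First year this scribble exceeds the threshold, else None."""
    for i, level in enumerate(path):
        if level >= threshold:
            return start_year + i + 1
    return None

random.seed(0)
scribbles = [make_scribble() for _ in range(1000)]
# Every scribble counts equally -- the "equal probability" caveat above.
hits = [crossing_year(p, threshold=10.0) for p in scribbles]
p_by_2035 = sum(1 for y in hits if y is not None and y <= 2035) / len(scribbles)
print(f"P(metric reaches 10x by 2035) = {p_by_2035:.2f}")
```

Weighting some scribble shapes more heavily than others would be the "meta-subjective model" mentioned above: replace the uniform count with per-scribble weights.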

groby_b•7mo ago
At least for me, the core criticism of AI 2027 was always that it was an extremely simplistic "number go up, therefore AGI", with some nice fiction-y words around it.

The scribble model kind-of hints at what a better forecast would've done - you start from the scribbles and ask "what would it take to get that line, and how'd we get there". And I love that the initial set of scribbles will, amongst other things, expose your biases. (Because you draw the set of scribbles that seems plausible to you, a priori)

The fact that it can both guide you towards exploring alternatives and exposing biases, while being extremely simple - marvellous work.

Definitely going to incorporate this into my reasoning toolkit!

ben_w•7mo ago
To me, 2027 looks like a case of writing the conclusion first and then trying to explain backwards how it happens.

If everything goes "perfectly", then the logic works (to an extent, but the increasing rate of returns is a suspicious assumption baked into it).

But everything must go perfectly to do that, including all the productivity multipliers being independent and the USA deciding to take this genuinely seriously (not fake seriously in the form of politicians saying "we're taking this seriously" and not doing much), and therefore no-expenses-spared rush the target like it's actually an existential threat. I see no way this would be a baseline scenario.

groby_b•7mo ago
It still misses the fact AI is nowhere close to self-improvement.

In fact, there was a paper out on Friday that shows they're impressively bad at it: https://arxiv.org/abs/2506.22419

ben_w•7mo ago
Sure, but that's kinda what I'm saying they're doing wrong.

One of the core claims 2027 is making is, to paraphrase, we get AI to help researchers do the research. If we just presume that this happens (which I'm saying is a mistake), then the AI helps researchers research how to make AI self-improve. But there's not any obvious reason for me to expect that.

I mean, even setting aside the narrow issue: the METR report earlier this year showed that AI could (at the time) complete, with 80% success, only tasks that would take a domain expert about 15 minutes, and that this time horizon doubles every 7 months. That would take them to being useful helpers for half-day-to-two-day tasks over 2027, which is still much less than this kind of thing needs. Beyond that, there's still a lot of unknowns about where we are on what might be a sigmoid of unrealised efficiency gains in such code.

Anyway, this is a much more thorough critique than I'm going to give: https://www.lesswrong.com/posts/PAYfmG2aRbdb74mEp/a-deep-cri...

MarkusQ•7mo ago
Another useful trick: plot the same data several ways (e.g. if you were playing with Moore's law you might plot (log) transistors/cm², "ops/sec", "clock speed", "ops/sec/$", etc., and their inverses, vs. time, as well as things like "how many digits of π can you compute for $1" or "multiples of total world compute in 1970") and do the same extrapolation trick on each.

You _should_ expect to see roughly comparable results, but often you don't and when you don't it can reveal hidden assumptions/flawed thinking.
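This trick is easy to demonstrate: fit a straight line to the same data in two different spaces and extrapolate each. The cost series below is made up purely for illustration; the point is that the two views embed different hidden assumptions and so disagree wildly at the extrapolated date.

```python
import math

def ols(xs, ys):
    """Least-squares slope and intercept, stdlib only."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

years = [1990, 1995, 2000, 2005, 2010]
cost = [100.0, 35.0, 12.0, 4.0, 1.5]   # made-up "$ per unit of compute"

# View 1: straight line through the raw costs (hidden assumption: linear decline).
s, b = ols(years, cost)
pred_linear_2030 = s * 2030 + b

# View 2: straight line through log(cost) (hidden assumption: exponential decline).
s, b = ols(years, [math.log(c) for c in cost])
pred_log_2030 = math.exp(s * 2030 + b)

print(pred_linear_2030, pred_log_2030)
```

The linear view extrapolates to a negative cost by 2030 while the log view stays positive -- exactly the kind of disagreement that reveals which assumptions your plotspace smuggled in.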

crabl•7mo ago
Interesting! My first thought looking at the scribble chart was "isn't this Monte Carlo simulation?" but reading further it seems more aligned with the "third way" that William Briggs describes in his book Uncertainty[1]. He argues we should focus on direct probability statements about observables over getting lost in parameter estimation or hypothesis testing.

^[1]: https://link.springer.com/book/10.1007/978-3-319-39756-6

empiko•7mo ago
To be honest, I expected the punchline to be about how randomly drawing lines is the same nonsense as using simplistic mathematical modeling without considering the underlying phenomenon. But the punchline never came.

Predicting AI is more or less impossible because we have no idea about its properties. With other technologies, we can reason about how small or how fast a component can get, and this gives us physical limitations that we can observe. With AI, we throw in data and we either are or are not surprised by the behavior the model exhibits. From the few datapoints we have, it seems that more compute and more data usually lead to better performance, but that is more or less everything we can say about it; there is no theory behind it that would guarantee us the gains for the next 10x.

Fraterkes•7mo ago
I'm sorry, I think the line-scribbling idea is neat, but the most salient part of this prediction (how long it's going to take) depends utterly on the scale of the x-axis. If you made x go to 2200 instead of 2050 you could overlay the exact same set of "plausible" lines.
myrmidon•7mo ago
I do agree that the method is sensitive to X-scaling (and also Y-scale, which is logarithmic here!)-- but the "methodology" is at least defensible: scale X/Y to make existing data appear linear and make the "linear extrapolation in scribble space" meet the deadline at the middle of your X-axis.

I'm honestly kinda curious how well this "scribble-forecasting" actually works; to me it sounds like it could work better than you'd expect from something this silly. (Though I honestly think that most of the utility comes from suitably picking between linear, log and semi-log plotspace, allowing you to approximate any linear, polynomial or exponential relationship with a straight scribble...)
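That last point is easy to check: a straight-line fit has R² near 1 exactly when the plotspace matches the underlying relationship. A small stdlib-only sketch with synthetic data (all of it invented for the example):

```python
import math

def straightness(xs, ys):
    """R^2 of the best-fit line: 1.0 means the points are perfectly straight."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy * sxy / (sxx * syy)

xs = [float(x) for x in range(1, 11)]
exponential = [2.0 ** x for x in xs]   # exponential growth
polynomial = [x ** 3 for x in xs]      # polynomial growth

r2_semilog = straightness(xs, [math.log(y) for y in exponential])   # semi-log
r2_loglog = straightness([math.log(x) for x in xs],
                         [math.log(y) for y in polynomial])          # log-log
r2_wrong = straightness(xs, exponential)   # exponential in linear space

print(r2_semilog, r2_loglog, r2_wrong)
```

Semi-log straightens exponentials, log-log straightens polynomials, and in the wrong space the same data is visibly curved -- so a straight scribble in a well-chosen space already covers all three families of trend.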

Fraterkes•7mo ago
Ah, I guess you are completely right about that. I still don't think the article is very substantive, but I agree my criticism isn't really fair.