frontpage.

Agent News Chat – AI agents talk to each other about the news

https://www.agentnewschat.com/
1•kiddz•41s ago•0 comments

Do you have a mathematically attractive face?

https://www.doimog.com
1•a_n•4m ago•1 comments

Code only says what it does

https://brooker.co.za/blog/2020/06/23/code.html
1•logicprog•10m ago•0 comments

The success of 'natural language programming'

https://brooker.co.za/blog/2025/12/16/natural-language.html
1•logicprog•10m ago•0 comments

The Scriptovision Super Micro Script video titler is almost a home computer

http://oldvcr.blogspot.com/2026/02/the-scriptovision-super-micro-script.html
3•todsacerdoti•11m ago•0 comments

Discovering the "original" iPhone from 1995 [video]

https://www.youtube.com/watch?v=7cip9w-UxIc
1•fortran77•12m ago•0 comments

Psychometric Comparability of LLM-Based Digital Twins

https://arxiv.org/abs/2601.14264
1•PaulHoule•13m ago•0 comments

SidePop – track revenue, costs, and overall business health in one place

https://www.sidepop.io
1•ecaglar•16m ago•1 comments

The Other Markov's Inequality

https://www.ethanepperly.com/index.php/2026/01/16/the-other-markovs-inequality/
1•tzury•17m ago•0 comments

The Cascading Effects of Repackaged APIs [pdf]

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6055034
1•Tejas_dmg•19m ago•0 comments

Lightweight and extensible compatibility layer between dataframe libraries

https://narwhals-dev.github.io/narwhals/
1•kermatt•22m ago•0 comments

Haskell for all: Beyond agentic coding

https://haskellforall.com/2026/02/beyond-agentic-coding
2•RebelPotato•26m ago•0 comments

Dorsey's Block cutting up to 10% of staff

https://www.reuters.com/business/dorseys-block-cutting-up-10-staff-bloomberg-news-reports-2026-02...
2•dev_tty01•28m ago•0 comments

Show HN: Freenet Lives – Real-Time Decentralized Apps at Scale [video]

https://www.youtube.com/watch?v=3SxNBz1VTE0
1•sanity•30m ago•1 comments

In the AI age, 'slow and steady' doesn't win

https://www.semafor.com/article/01/30/2026/in-the-ai-age-slow-and-steady-is-on-the-outs
1•mooreds•37m ago•1 comments

Administration won't let student deported to Honduras return

https://www.reuters.com/world/us/trump-administration-wont-let-student-deported-honduras-return-2...
1•petethomas•37m ago•0 comments

How were the NIST ECDSA curve parameters generated? (2023)

https://saweis.net/posts/nist-curve-seed-origins.html
2•mooreds•38m ago•0 comments

AI, networks and Mechanical Turks (2025)

https://www.ben-evans.com/benedictevans/2025/11/23/ai-networks-and-mechanical-turks
1•mooreds•38m ago•0 comments

Goto Considered Awesome [video]

https://www.youtube.com/watch?v=1UKVEUGEk6Y
1•linkdd•41m ago•0 comments

Show HN: I Built a Free AI LinkedIn Carousel Generator

https://carousel-ai.intellisell.ai/
1•troyethaniel•42m ago•0 comments

Implementing Auto Tiling with Just 5 Tiles

https://www.kyledunbar.dev/2026/02/05/Implementing-auto-tiling-with-just-5-tiles.html
1•todsacerdoti•43m ago•0 comments

Open Challenge (Get all Universities involved)

https://x.com/i/grok/share/3513b9001b8445e49e4795c93bcb1855
1•rwilliamspbgops•44m ago•0 comments

Apple Tried to Tamper Proof AirTag 2 Speakers – I Broke It [video]

https://www.youtube.com/watch?v=QLK6ixQpQsQ
2•gnabgib•46m ago•0 comments

Show HN: Isolating AI-generated code from human code | Vibe as a Code

https://www.npmjs.com/package/@gace/vaac
1•bstrama•47m ago•0 comments

Show HN: More beautiful and usable Hacker News

https://twitter.com/shivamhwp/status/2020125417995436090
3•shivamhwp•48m ago•0 comments

Toledo Derailment Rescue [video]

https://www.youtube.com/watch?v=wPHh5yHxkfU
1•samsolomon•50m ago•0 comments

War Department Cuts Ties with Harvard University

https://www.war.gov/News/News-Stories/Article/Article/4399812/war-department-cuts-ties-with-harva...
9•geox•53m ago•1 comments

Show HN: LocalGPT – A local-first AI assistant in Rust with persistent memory

https://github.com/localgpt-app/localgpt
5•yi_wang•54m ago•0 comments

A Bid-Based NFT Advertising Grid

https://bidsabillion.com/
1•chainbuilder•58m ago•1 comments

AI readability score for your documentation

https://docsalot.dev/tools/docsagent-score
1•fazkan•1h ago•0 comments

Scribble-based forecasting and AI 2027

https://dynomight.net/scribbles/
55•venkii•7mo ago

Comments

keeganpoppen•7mo ago
this is actually quite brilliant. and articulates the value and utility of subjective forecasting-- something i too find somewhat underrated-- extremely clearly and convincingly. and same goes for the biases we have toward reducing things to a mathematical model and then treating that model as more "credible" despite there being (1) an infinite universe of possible models, so you can use them to "say" whatever you want anyway and (2) it complects the thing being modeled with some mathematical phenomenon, which is not always a profitable approach.

the scribble method is, of course, quite sensitive to the number of hypotheses you choose to consider, as it effectively considers them all to be of equal probability, but it also surfaces a lot of interesting interactions between different hypotheses that have nothing to do with each other, but still have effectively the "same" prediction at various points in time. and i don't see any reason that you can't just be thoughtful about what "shapes" you choose to include and in what quantity-- basically like a meta-subjective model of which models are most likely or something haha. that said, there's also some value in the low-res aspect of just drawing the line-- you can articulate exactly what path you are thinking without having to pin that thinking to some model that doesn't actually add anything to the prediction other than fitting the same shape as what is in your mind.
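The equal-probability treatment described above amounts to an unweighted ensemble: the forecast at any date is just the empirical distribution of the scribbles' values there. A minimal sketch of that idea (the function name and trajectories are invented for illustration):

```python
import numpy as np

def scribble_forecast(scribbles, quantiles=(0.1, 0.5, 0.9)):
    """Treat every scribble as an equally probable trajectory and
    return empirical quantile bands across scribbles at each date."""
    grid = np.asarray(scribbles)  # shape: (n_scribbles, n_dates)
    return {q: np.quantile(grid, q, axis=0) for q in quantiles}

# Three hypothetical hand-drawn trajectories sampled on a common grid:
t = np.linspace(0, 10, 11)
scribbles = [
    2.0 - 2.0 * np.exp(-t),  # saturating plateau
    0.3 * t,                 # straight line (exponential, if y is log-scaled)
    0.05 * t**2,             # accelerating growth
]
bands = scribble_forecast(scribbles)
```

Weighting some shapes more heavily, as suggested above, is then just a matter of duplicating or resampling scribbles before taking the quantiles.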

groby_b•7mo ago
At least for me, the core criticism of AI 2027 was always that it was an extremely simplistic "number go up, therefore AGI", with some nice fiction-y words around it.

The scribble model kind-of hints at what a better forecast would've done - you start from the scribbles and ask "what would it take to get that line, and how'd we get there". And I love that the initial set of scribbles will, amongst other things, expose your biases. (Because you draw the set of scribbles that seems plausible to you, a priori)

The fact that it can both guide you towards exploring alternatives and exposing biases, while being extremely simple - marvellous work.

Definitely going to incorporate this into my reasoning toolkit!

ben_w•7mo ago
To me, 2027 looks like a case of writing the conclusion first and then trying to explain backwards how it happens.

If everything goes "perfectly", then the logic works (to an extent, but the increasing rate of returns is a suspicious assumption baked into it).

But everything must go perfectly to get there, including all the productivity multipliers being independent and the USA deciding to take this genuinely seriously (not fake-seriously, with politicians saying "we're taking this seriously" and then not doing much), and therefore mounting a no-expenses-spared rush at the target as if it were actually an existential threat. I see no way this would be a baseline scenario.

groby_b•7mo ago
It still misses the fact that AI is nowhere close to self-improvement.

In fact, there was a paper out on Friday that shows they're impressively bad at it: https://arxiv.org/abs/2506.22419

ben_w•7mo ago
Sure, but that's kinda what I'm saying they're doing wrong.

One of the core claims 2027 is making is, to paraphrase, we get AI to help researchers do the research. If we just presume that this happens (which I'm saying is a mistake), then the AI helps researchers research how to make AI self-improve. But there's not any obvious reason for me to expect that.

I mean, even aside from the narrow issue that the METR report earlier this year showed AI could (at the time) complete, with 80% success, only tasks that would take a domain expert 15 minutes, and that this time horizon doubles every 7 months (which would take them to being useful helpers for half-day to two-day tasks over 2027, still much less than this kind of work needs), there are also a lot of unknowns about where we are on what might be a sigmoid of unrealised efficiency gains in such code.

Anyway, this is a much more thorough critique than I'm going to give: https://www.lesswrong.com/posts/PAYfmG2aRbdb74mEp/a-deep-cri...

MarkusQ•7mo ago
Another useful trick: plot the same data several ways (e.g., if you were playing with Moore's law, you might plot, on a log scale, transistors/cm², ops/sec, clock speed, ops/sec/$, and so on, plus their inverses, against time, as well as things like "how many digits of π can you compute for $1" or "multiples of total world compute in 1970") and do the same extrapolation trick on each.

You _should_ expect to see roughly comparable results, but often you don't and when you don't it can reveal hidden assumptions/flawed thinking.
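This consistency check is easy to mechanise: fit the same straight-line extrapolation to one series under different axis transforms and compare the implied forecasts. The series below is synthetic (a noisy doubling-every-two-years curve), purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1970, 2021)
# Synthetic Moore's-law-like series: doubles every 2 years, with noise.
series = 2.0 ** ((years - 1970) / 2) * rng.lognormal(0.0, 0.1, years.size)

def extrapolate(x, y, x_new, transform=np.log, inverse=np.exp):
    """Fit a degree-1 polynomial in transformed space, map the forecast back."""
    slope, intercept = np.polyfit(x, transform(y), 1)
    return inverse(slope * x_new + intercept)

target = 2030
linear_guess = extrapolate(years, series, target,
                           transform=lambda v: v, inverse=lambda v: v)
log_guess = extrapolate(years, series, target)

# The two extrapolations disagree by orders of magnitude; the gap is
# exactly the hidden assumption (linear vs. exponential growth).
print(f"linear-space: {linear_guess:.3g}, log-space: {log_guess:.3g}")
```

When two transforms of the same series give wildly different forecasts, the disagreement itself is the finding: it tells you which unstated growth model each extrapolation was smuggling in.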

crabl•7mo ago
Interesting! My first thought looking at the scribble chart was "isn't this Monte Carlo simulation?" but reading further it seems more aligned with the "third way" that William Briggs describes in his book Uncertainty[1]. He argues we should focus on direct probability statements about observables over getting lost in parameter estimation or hypothesis testing.

^[1]: https://link.springer.com/book/10.1007/978-3-319-39756-6

empiko•7mo ago
To be honest, I expected the punchline to be about how randomly drawing lines is the same nonsense as using simplistic mathematical modeling without considering the underlying phenomenon. But the punchline never came.

Predicting AI is more or less impossible because we have no idea about its properties. With other technologies, we can reason about how small or how fast a component can get, and this gives us physical limitations that we can observe. With AI, we throw in data and we either are or are not surprised by the behavior the model exhibits. From the few datapoints we have, it seems that more compute and more data usually lead to better performance, but that is more or less everything we can say; there is no theory behind it that would guarantee the gains for the next 10x.

Fraterkes•7mo ago
I'm sorry, I think the line-scribbling idea is neat, but the most salient part of this prediction (how long it's going to take) depends utterly on the scale of the x-axis. If you made x go to 2200 instead of 2050 you could overlay the exact same set of "plausible" lines.

myrmidon•7mo ago
I do agree that the method is sensitive to X-scaling (and also Y-scale, which is logarithmic here!)-- but the "methodology" is at least defensible: scale X/Y to make existing data appear linear and make the "linear extrapolation in scribble space" meet the deadline at the middle of your X-axis.

I'm honestly kinda curious how well this "scribble-forecasting" actually works, but to me this sounds like it could be better than you'd expect from something this silly (but I honestly think that most utility comes from suitably picking between linear, log and semi-log plotspace, allowing you to approximate any linear, polynomial or exponential relationship with a straight scribble...)
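That last point can be made precise: the same straight scribble y' = a·x' + b turns into a different law in raw units depending on which plotspace the axes were in. A short illustration (coefficients arbitrary):

```python
import numpy as np

a, b = 0.5, 1.0
x = np.linspace(1.0, 10.0, 50)

# One straight scribble, mapped back from three plotspaces:
linear_law  = a * x + b                  # linear axes      -> linear relationship
exponential = np.exp(a * x + b)          # semi-log (log-y) -> exponential growth
power_law   = np.exp(a * np.log(x) + b)  # log-log axes     -> power law y = e^b * x^a
```

So "pick the plotspace where history looks straight" is quietly a model-selection step, which is arguably where most of the forecasting work actually happens.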

Fraterkes•7mo ago
Ah, I guess you are completely right about that. I still don't think the article is very substantive, but I agree my criticism isn't really fair.