frontpage.

Restricted data once again leaked on War Thunder forums

https://ukdefencejournal.org.uk/classified-data-once-again-leaked-on-war-thunder-forums/
1•ortusdux•50s ago•0 comments

NYT: Discussion of Sick Day Usage

https://www.nytimes.com/2025/06/21/magazine/sick-leave-days-ethics.html
1•cranky908canuck•58s ago•0 comments

A live comparison of 12 classless CSS frameworks on the same semantic HTML

https://hugo-classless.netlify.app/
1•mozanunal•1m ago•0 comments

Collections: Nitpicking Gladiator's Iconic Opening Battle, Part I

https://acoup.blog/2025/06/06/collections-nitpicking-gladiators-iconic-opening-battle-part-i/
1•diodorus•2m ago•0 comments

How to Read Bug Reports (2016)

https://www.massicotte.org/reading-bug-reports
1•Austin_Conlon•2m ago•0 comments

Philips – Fixables [video]

https://www.youtube.com/watch?v=De8qkIY5vJY
1•seretogis•4m ago•0 comments

First quantum-mechanical model of quasicrystals reveals why they exist

https://phys.org/news/2025-06-quantum-mechanical-quasicrystals-reveals.html
1•bookofjoe•4m ago•0 comments

Selling or hiring internationally? You're probably breaking the law

https://useportcall.com/blog/cross-border-growth-creates-invisible-compliance-risks
1•beecee•5m ago•0 comments

The Ford EBike Lineup

https://ford-bikes.com/
1•nateb2022•7m ago•0 comments

The cost of quick wins (with examples)

https://ottic.ai/blog/marketing-quick-wins-with-examples/
1•rafaepta•8m ago•0 comments

JVM Rainbow – Mixing Java, Scala, Kotlin and Groovy

https://github.com/Hakky54/java-tutorials/tree/main/jvm-rainbow
1•hakky54•9m ago•0 comments

Bogong moths use a stellar compass for long-distance navigation at night

https://www.nature.com/articles/s41586-025-09135-3
1•Anon84•10m ago•0 comments

Energy Challenges for Martian Colonists

https://www.youtube.com/watch?v=-_wY30rH3Xc
2•d_silin•19m ago•0 comments

Visualizing Homotopy Groups [video]

https://www.youtube.com/watch?v=CxGtAuJdjYI
1•lying4fun•29m ago•0 comments

Oil prices sink after Iranian strike on US airbase reduces fears of disruption

https://www.theguardian.com/business/2025/jun/23/oil-prices-iranian-strike-us-airbase
2•spzx•31m ago•0 comments

The end of Stop Killing Games [video]

https://www.youtube.com/watch?v=HIfRLujXtUo
1•zavertnik•35m ago•1 comments

Application compatibility for Windows 95 crashed a cash register

https://devblogs.microsoft.com/oldnewthing/20250610-00/?p=111260
2•OptionOfT•37m ago•0 comments

Show HN: Rewizo, a platform for casual online earning

https://www.rewizo.com/
1•Rafay2006•39m ago•1 comments

Show HN: I built an AI Powered Word Docs Editor

https://breezeai.live/
1•yashrajvrmaa•39m ago•1 comments

Driving the Rust Compiler to Compile Single Files as Shellcode

https://kirchware.com/Driving-the-Rust-Compiler-to-Compile-Single-Files-as-Shellcode
1•brson•40m ago•0 comments

Proficient Python: A free interactive online course

http://blog.pamelafox.org/2025/06/proficient-python-free-interactive.html
2•pamelafox•40m ago•0 comments

China's Many Ghost Towns of Abandoned Mansions (2024)

https://www.architecturaldigest.com/story/the-story-behind-the-many-ghost-towns-of-abandoned-mansions-across-china
3•mooreds•42m ago•0 comments

Weird Expressions in Rust

https://www.wakunguma.com/blog/rust-weird-expr
1•brson•42m ago•0 comments

Message from UW leadership on budget reductions

https://news.wisc.edu/message-from-uw-leadership-on-budget-reductions/
1•archy_•43m ago•0 comments

Innovation and Repetition (1990)

https://www.clunyjournal.com/p/innovation-and-repetition-rene-girard
1•crescit_eundo•44m ago•0 comments

GameTreeCalculator – calculate the optimal solution to any extensive-form game

https://gametreecalculator.com
3•actionflop•45m ago•0 comments

Tesla launches early access robotaxi program in Austin

https://twitter.com/SawyerMerritt/status/1936997202880081950
2•leesec•45m ago•1 comments

Fusing automated UI testing with scripts for effectively fuzzing Android apps

https://github.com/ecnusse/Kea2
1•tingsu•45m ago•1 comments

Colombian soldiers fought guerrillas. Now they're fighting for Mexican cartels

https://www.latimes.com/world-nation/story/2025-06-09/mexican-cartels-are-now-recruiting-colombian-mercenaries
3•PaulHoule•48m ago•0 comments

Death Clock

https://www.death-clock.org/
1•austinallegro•49m ago•0 comments

A deep critique of AI 2027's bad timeline models

https://www.lesswrong.com/posts/PAYfmG2aRbdb74mEp/a-deep-critique-of-ai-2027-s-bad-timeline-models
62•paulpauper•3h ago

Comments

brcmthrowaway•2h ago
So much bikeshedding and armchair expertise displayed in this field.
goatlover•2h ago
Which would that be, the arguments for ASI being near and how that could be apocalyptic, or the push back on those timelines and doomsday (or utopian) proclamations?
evilantnie•2h ago
I don’t think the real divide is “doom tomorrow” vs “nothing to worry about.” The crux is a pretty straightforward philosophical question: "what does it even mean to generalize intelligence and agency," and how much can scaling laws tell us about that?

The back-and-forth over σ²’s and growth exponents feels like theatrics that bury the actual debate.

vonneumannstan•1h ago
>The crux is a pretty straightforward philosophical question: "what does it even mean to generalize intelligence and agency," and how much can scaling laws tell us about that?

Truly a bizarre take. I'm sure the Dinosaurs also debated the possible smell and taste of the asteroid that was about to hit them. The real debate. lol.

ysofunny•2h ago
with all that signaling... it's almost like they're trying to communicate!!! who would've thought!?
TimPC•2h ago
This critique is fairly strong and offers a lot of insight into the critical thinking behind it. The parts of the math I've looked at do check out.
yodon•2h ago
So... both authors predict superhuman intelligence, defined as AI that can complete tasks that would take humans hundreds of hours, to be a thing "sometime in the next few years", both authors predict "probably not before 2027, but maybe" and both authors predict "probably not longer than 2032, but maybe", and one author seems to think their estimates are wildly better than those of the other author.

That's not quite the level of disagreement I was expecting given the title.

jollyllama•2h ago
That's not very investor of you
LegionMammal978•2h ago
As far as I can tell, the author of the critique specifically avoids espousing a timeline of his own. Indeed, he dislikes how these sorts of timeline models are used in general:

> I’m not against people making shoddy toy models, and I think they can be a useful intellectual exercise. I’m not against people sketching out hypothetical sci-fi short stories, I’ve done that myself. I am against people treating shoddy toy models as rigorous research, stapling them to hypothetical short stories, and then taking them out on podcast circuits to go viral. What I’m most against is people taking shoddy toy models seriously and basing life decisions on them, as I have seen happen for AI2027. This is just a model for a tiny slice of the possibility space for how AI will go, and in my opinion it is implemented poorly even if you agree with the author's general worldview.

In particular, I wouldn't describe the author's position as "probably not longer than 2032" (give or take the usual quibbles over what tasks are a necessary part of "superhuman intelligence"). Indeed, he rates social issues from AI as a more plausible near-term threat than dangerous AGI takeoff [0], and he is very skeptical about how well any software-based AI can revolutionize the physical sciences [1].

[0] https://titotal.substack.com/p/slopworld-2035-the-dangers-of...

[1] https://titotal.substack.com/p/ai-is-not-taking-over-materia...

ysofunny•1h ago
but what is the difference between a shoddy toy model and real pro-level "rigorous research"?

it's like asking about the difference between amateur toy audio gear and real pro-level audio gear... (which is not a simple thing, given "prosumer" products dominate the landscape)

the only point in betting on when "real AGI" will happen boils down to the payouts from gambling on it. are such gambles a zero-sum game? does that depend on who escrows the bet??

what do I get if I am correct? how should the incorrect lose?

LegionMammal978•1h ago
If you believe that there's any plausible chance of AGI causing a major catastrophe short of the immediate end of the world, then its precise nature can have all sorts of effects on how the catastrophe could unfold and how people should respond to it.
vonneumannstan•1h ago
For rationalists this is about as bad as disagreements can get...
TimPC•1h ago
He allows, based on the model math, that it might be possible, but doesn't actually say what his own prediction is. He also argues it's possible we are on an s-curve that levels out before superhuman intelligence.
sweezyjeezy•36m ago
I don't think the author of this article is making any strong prediction; in fact, I think a lot of the article is a critique of whether such an extrapolation can be done meaningfully at all.

Most of these models predict superhuman coders in the near term, within the next ten years. This is because most of them share the assumption that a) current trends will continue for the foreseeable future, b) that “superhuman coding” is possible to achieve in the near future, and c) that the METR time horizons are a reasonable metric for AI progress. I don’t agree with all of these assumptions, but I understand why people who do accept them think superhuman coders are coming soon.

Personally I think any model that puts zero weight on the idea that there could be some big stumbling blocks ahead, or even a possible plateau, is not a good model.
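
For concreteness, a minimal sketch of the kind of trend extrapolation being described here. The 1-hour starting horizon, 7-month doubling time, and 2000-hour "superhuman coder" threshold below are illustrative assumptions, not figures taken from METR or AI 2027:

    import math

    # Illustrative METR-style time-horizon extrapolation. Assumed (for
    # illustration only): the task length an AI can complete at a 50%
    # success rate doubles every 7 months, starting from 1 hour.
    start_horizon_hours = 1.0
    doubling_time_months = 7.0

    # Months until the horizon reaches a hypothetical "superhuman coder"
    # threshold of roughly one work-year (~2000 hours) of human task time:
    target_hours = 2000.0
    months = doubling_time_months * math.log2(target_hours / start_horizon_hours)
    print(f"~{months:.0f} months (~{months / 12:.1f} years)")  # ~77 months, ~6.4 years

    # The arithmetic only holds if the exponential trend continues; an
    # s-curve that flattens out breaks the extrapolation entirely.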

XorNot•16m ago
The primary question is always whether they'd have made those sorts of predictions based on the results the field was showing the same amount of time in the past.

Pre-ChatGPT, I very much doubt the bullish predictions on AI would've been made the way they are now.

lubujackson•1h ago
These predictions seem wildly reductive in any case, and extrapolating AI's ability to complete tasks that would take a human 30 seconds -> 10 minutes is far different from going from 10 minutes to 5 years. For one thing, a 5-year task generally requires much more input and intent than a 10-minute task. Already we have ramped up from "enter a paragraph" to complicated Cursor rules and rich context prompts to get to where we are today. This is completely overlooked in these simple "graphs go up" predictions.
echelon•1h ago
I'm also interested in error rates multiplying for simple tasks.

A human can do a long sequence of easy tasks without error, or can easily correct mistakes along the way. Can a model do the same?
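
To make the compounding arithmetic concrete, a small sketch; the per-step error rates are invented for illustration, not measured model error rates:

    # Probability of finishing n steps with no error, assuming each step
    # fails independently with per-step error rate eps (an idealization:
    # real model errors are unlikely to be independent).
    for eps in (0.001, 0.01, 0.05):
        for n in (10, 100, 1000):
            p_all_correct = (1 - eps) ** n
            print(f"eps={eps:<5}  n={n:<4}  P(no errors)={p_all_correct:.3f}")

    # A 1% per-step error rate already fails most 100-step sequences
    # (0.99**100 ~= 0.366) unless errors can be detected and corrected.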

kingstnap•15m ago
The recent Apple "LLMs can't reason yet" paper was exactly this. They just tested if models could run an exponential number of steps.

Of course, they gave it a terrible clickbait title and framed the question and graphs incorrectly. But if they had done the study better, it would have been "How long a sequence of algorithmic steps can LLMs execute before making a mistake or giving up?"
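
Under the same independence idealization as the sketch above, that reframed question has a rough closed form: the expected number of steps before the first mistake is about 1/eps, so halving the per-step error rate only doubles the reliable run length. Again with invented rates:

    # Mean number of successful steps before the first failure in a
    # geometric process: (1 - eps) / eps, roughly 1/eps for small eps.
    for eps in (0.05, 0.01, 0.001):
        expected_steps = (1 - eps) / eps
        print(f"eps={eps}: ~{expected_steps:.0f} steps before the first mistake")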

kypro•1h ago
As someone in the P(doom) > 90% category, I think that, in general, making overly precise predictions is a really bad way to highlight AI risks (assuming that was the goal of AI 2027).

Making predictions that are too specific just opens you up to pushback from people who are more interested in critiquing the exact details of your softer predictions (such as those around timelines) than in engaging with your hard predictions about likely outcomes. And while I think articles like this are valuable for refining timeline predictions, I find a lot of people use them as evidence to dismiss the stronger predictions made about the risks of ASI.

I think people like Nick Bostrom make much more convincing arguments about AI risk because they don't depend on overly detailed predictions that can be easily nit-picked, but are instead much more general and focus on the unique nature of the risks AI presents.

For me, the problem with timelines is that they're unknowable, due to the unpredictable nature of ASI. The fact that we are rapidly developing a technology which most people would accept comes with at least some existential risk, whose progress curve we can't predict, and whose solutions would involve significant coordination problems, should concern people without anyone having to say it will happen in x number of years.

I think AI 2027 is interesting as a science fiction about potential futures we could be heading towards, but that's really it.

The problem with being an AI doomer is that you can't say "I told you so" if you're right, so any personal predictions you make have close to no expected pay-out, either socially or economically. This is different from other risks, where predicting accurately when others don't can still benefit you.

I have no meaningful voice in this space, so I'll just keep saying we're fucked, because what does it matter what I think. But I wish there were more people with influence out there who were seriously thinking about how best to use that influence, rather than stroking their own egos with future predictions, which, even if I happen to agree with them, do next to nothing to improve the distribution of outcomes.

Fraterkes•1h ago
I’m not trying to be disingenuous, but in what ways have you changed your life now that you believe there's a >90% chance of an end to civilization/humanity? Are you living like a terminal cancer patient?

(I'm sorry, I know it's a crass question)

allturtles•1h ago
The person you're replying to said "For me the risk of timelines is that they're unknowable due to the unpredictable nature of ASI." So they are predicting >90% chance of doom, but not when that will happen. Given that there is already a 100% chance of death at some unknown point in the future, why would this cause GP to start living like a terminal cancer patient (presumably defined as someone with a >99% chance of death in the next year)?
lava_pidgeon•1h ago
I'd like to point out that the existence of AGI in the future does change my own future planning. I am 35. Do I need to save for a pension? Does it make sense to start a family? These aren't 1-year questions but 20-years-ahead questions...
amarcheschi•1h ago
If you're so terrified of AI that you don't start a family despite wanting one and being able to, it must be miserable to eventually live through the years the way everyone who tried to predict the end of the world did (except for those who died of other causes before the predicted end)
siddboots•1h ago
I think both approaches are useful. AI2027 presents a specific timeline in which a) the trajectory of tech is at least somewhat empirically grounded, and b) each step of the plot arc is plausible. There's a chance of it being convincing to a skeptic who had otherwise thought of the whole "rogue AI" scenario as a kind of magical thinking.
boznz•1h ago
I expect the predictions for fusion back in the 1950s and 1960s generated similar essays; they had not gotten to ignition, but the science was solid. The 'science' of moving from AGI to ASI is not really that solid: we have yet to achieve 'AI ignition' even in the lab. (Any AIs that have achieved consciousness, feel free to disagree)
staunton•58m ago
This is a lot of text, detail, and hair-splitting just to say "modeling things like this is bullshit". It's engaging "seriously" and "on the merits" with something that from the very start was just marketing fluff packaged as some kind of prediction.

I'm not sure the author did anyone a favor with this write-up. More than anything, it buries the main point ("this kind of forecasting is fundamentally bullshit") under a bunch of complicated-sounding details that lend credibility to the original predictions, which the original authors now get to argue about and thank people for pointing out "minor issues which we have now addressed in the updated version".

ed•27m ago
Anyone old enough to remember EPIC 2014? It was a viral Flash video, released in 2004, about the future of Google and news reporting. I imagine AI 2027 will age similarly well.

https://youtu.be/LZXwdRBxZ0U