
Lipinski's Rule of Five

https://en.wikipedia.org/wiki/Lipinski%27s_rule_of_five
1•PaulHoule•11m ago•0 comments

Anthropic Lands Victory in AI Case on Fair Use

https://www.wsj.com/tech/ai/anthropic-lands-partial-victory-in-ai-case-set-to-shape-future-rulings-e3560114
1•samspenc•13m ago•1 comments

Tesla Robotaxi videos show Elon's way behind Waymo

https://www.theregister.com/2025/06/24/tesla_robotaxi_austin/
1•thunderbong•16m ago•0 comments

Show HN: Plant Identifier App

https://serpapi.com/blog/build-plant-identifier-app-with-google-lens-api/
1•terrytys•19m ago•0 comments

Can NATO Keep It Together?

https://foreignpolicy.com/2025/06/20/nato-summit-hague-trump-russia-ukraine-alliance-defense-spending/
1•mooreds•22m ago•0 comments

Relight Your Dynamic Long Videos for Embodied Agents and Film Making

https://github.com/Linketic/TC-Light
1•XYZ_Entropy•22m ago•1 comments

Show HN: MyCashCube – A Free Budgeting App That Respects Your Privacy

https://www.mycashcube.com/
1•AmirNajari•27m ago•0 comments

UK may require Google to provide search choice and change ranking

https://techcrunch.com/2025/06/24/uk-may-require-google-to-give-users-alternative-search-options-and-rank-its-results-more-fairly/
1•redm•28m ago•0 comments

Web Du Bois – data scientist (2022)

https://blog.engora.com/2022/02/web-du-bois-data-scientist.html
1•Vermin2000•29m ago•0 comments

Leaked Fairphone 6 promo video unveils Essentials feature of brand-new slider

https://www.notebookcheck.net/Leaked-Fairphone-6-promo-video-unveils-surprising-feature-of-brand-new-slider.1043689.0.html
1•LorenDB•31m ago•0 comments

Abusing copyright strings to trick software into thinking it's on competitor PC

https://devblogs.microsoft.com/oldnewthing/20250624-00/?p=111299
2•paulmooreparks•32m ago•0 comments

Ask HN: Is anyone using AMD GPUs for their AI workloads?

2•technoabsurdist•36m ago•0 comments

MDX Docs

https://github.com/thequietmind/mdx-docs
1•thequietmind•37m ago•1 comments

A fluentbit plugin to collect data to database

https://github.com/CharellKing/fluentbit-output-database-plugin
1•mrandycome•37m ago•0 comments

Developing a Simple Universal Header Navigation Bar in HarmonyOS Next

1•flfljh•42m ago•0 comments

Stanford CS336 Language Modeling from Scratch

https://www.youtube.com/playlist?list=PLoROMvodv4rOY23Y0BoGoBGgQ1zmU_MT_
1•myth_drannon•42m ago•0 comments

Detailed Guide to Developing Flutter Plugins for HarmonyOS

1•flfljh•43m ago•0 comments

Azure SQL Managed Instance Storage Is Regularly as Slow as 60 Seconds

https://kendralittle.com/2024/12/18/azure-sql-managed-instance-storage-regularly-slow-60-seconds/
1•zX41ZdbW•45m ago•0 comments

How Synthflow AI is cutting through the noise in a loud AI voice category

https://techcrunch.com/2025/06/24/how-synthflow-ai-is-cutting-through-the-noise-in-a-loud-ai-voice-category/
1•codexy•45m ago•0 comments

Scientists have created healthy, fertile mice with two fathers

https://www.economist.com/science-and-technology/2025/06/24/scientists-have-created-healthy-fertile-mice-with-two-fathers
3•bdev12345•48m ago•0 comments

Ask HN: How do novelists feel about LLMs?

1•keepamovin•49m ago•2 comments

See how your website ranks on ChatGPT

https://www.propensia.ai/
1•LargePanda•50m ago•0 comments

OpenADP, needs volunteers to help prevent mass secret surveillance

https://openadp.org
2•WaywardGeek•53m ago•1 comments

The collective waste caused by poor documentation

https://shanrauf.com/archive/collective-waste-from-poor-documentation
1•delifue•56m ago•0 comments

Plan, Organize, and Monetize Your Podcast

https://outro.fm/
1•mooreds•57m ago•0 comments

Build your first iOS app on Linux / Windows

https://xtool.sh/tutorials/xtool/first-app/
2•todsacerdoti•1h ago•0 comments

AI Sexbots and the Boundaries of Love and Dignity in the Workplace

https://www.thepublicdiscourse.com/2025/03/97471/
2•StatsAreFun•1h ago•1 comments

Omega: Can LLMs Reason Outside the Box in Math?

https://arxiv.org/abs/2506.18880
2•marojejian•1h ago•1 comments

Chromebrew/chromebrew: Package manager for Chrome OS

https://github.com/chromebrew/chromebrew
1•josephscott•1h ago•0 comments

PostgreSQL Branching: Xata vs. Neon vs. Supabase

https://xata.io/blog/neon-vs-supabase-vs-xata-postgres-branching-part-1?trk=feed_main-feed-card_reshare_feed-article-content
1•gk1•1h ago•0 comments

Analyzing a Critique of the AI 2027 Timeline Forecasts

https://thezvi.substack.com/p/analyzing-a-critique-of-the-ai-2027
36•jsnider3•6h ago

Comments

f38zf5vdt•6h ago
I think the author is right about AI only accelerating to the next frontier when AI takes over AI research. If the timelines are correct and that happens in the next few years, the widely desired job of AI researcher may not even exist by then -- it'll all be a machine-based research feedback loop where humans only hinder the process.

Every other intellectual job will presumably be gone by then too. Maybe AI will be the second great equalizer, after death.
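
A toy sketch of why that feedback loop would change the growth regime (my own illustration with made-up constants, not anything from the article): if research is paced by a fixed pool of human researchers, capability grows roughly linearly; if capability itself sets the pace of further research, it compounds.

    # Toy model with made-up constants: capability growth when research is
    # human-paced vs. when AI capability feeds back into AI research itself.
    def human_paced(years, rate=1.0, c0=1.0):
        # Fixed human research output per year -> linear growth.
        c = c0
        for _ in range(years):
            c += rate
        return c

    def self_improving(years, k=0.5, c0=1.0):
        # Research output proportional to current capability -> compounding.
        c = c0
        for _ in range(years):
            c += k * c
        return c

    for t in (1, 3, 5, 10):
        print(t, human_paced(t), round(self_improving(t), 1))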

goatlover•5h ago
Except we have no evidence of AI being able to take over AI research, any more than we have evidence so far that automation will significantly reduce human labor this time. It's all speculation based on extrapolating what some researchers think will happen as models scale up, or what funders hope will happen as they pour more billions into the hype machine.
dinfinity•4h ago
It's also extrapolating from what already exists. We are way beyond 'just some academic theories'.

One can argue all day about timelines, but AI has progressed from nonexistent to a level rivaling and surpassing many humans at many tasks, in less than 100 years. Arguably, all the evidence we have points to AI being able to take over AI research at some point in the near future.

suddenlybananas•3h ago
>surpassing many humans

I don't really think this is true, unless you'd be willing to say calculators are smarter than humans (or else you're a misanthrope who would do well to actually talk to other people).

spongebobstoes•2h ago
idk, if you try something like o3-pro, it's definitely smarter than a lot of people I know, for most definitions of "smarter"

Even the chatgpt voice mode is an okay conversation partner, and that's v1 of s2s (speech-to-speech)

variance is still very high, but there is every indication that it will get better

will it surpass cutting edge researchers soon? I don't think in the next 2 years, but in the next 10 I don't feel confident one way or the other

pier25•3h ago
> all the evidence we have points to AI being able to take over AI research at some point in the near future.

Does it?

That's like looking at a bicycle or car and saying "all the evidence points out we'll be able to do interstellar travel in the future".

KaiserPro•5h ago
bangs head against the table.

Look, fitting a single metric to a curve and projecting from that only gets you a "model" that conforms to your curve fitting.
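
For illustration, a minimal sketch of that kind of single-metric extrapolation, with made-up benchmark numbers (nothing here is from AI 2027): fit a line in log space, i.e. an exponential, and the "forecast" is just that curve family echoed back at you.

    # Made-up numbers: fit an exponential to one metric, then "project" it.
    import numpy as np

    years = np.array([2020, 2021, 2022, 2023, 2024])
    score = np.array([10.0, 18.0, 35.0, 70.0, 140.0])  # hypothetical benchmark

    # A line fit in log space is an exponential fit in the original space.
    slope, intercept = np.polyfit(years, np.log(score), 1)

    def project(year):
        return float(np.exp(intercept + slope * year))

    print(project(2027))  # ~1000, "superhuman" purely by construction of the curve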

"proper" AI, where it starts to remove 10-15% of jobs will cause an economic blood bath.

The current rate of AI expansion requires almost exponential cash injections. That cash comes from petro-dollars and advertising sales (and the ability of investment banks to print money based on those investments). Those sources of cash require a functioning world economy.

Given that the US economy is three Fox News headlines away from collapse [1], an exponential money supply looks a bit dicey.

If you, in the space of 2 years, remove 10-15% of all jobs, you will spark revolutions. This will cause loans to be called in, banks to fail, and the dollar, presently run by obvious dipshits, to evaporate.

This will stop investment in AI, which means no exponential growth.

Sure you can talk about universal credit, but unless something radical changes, the people who run our economies will not consent to giving away cash to the plebs.

AI 2027 is unmitigated bullshit, but with graphs, so people think there is a science to it.

[1] Trump needs a "good" economy. If the Fed, which is currently mostly independent, needs to raise interest rates and Fox News doesn't like it, then Trump will remove its independence. This will really raise the chance of the dollar being dumped for something else (and it's either the euro or the renminbi, but more likely the latter).

That'll also kill the UK, because for some reason we hold ~1.2 times our GDP in US short-term bonds.

TLDR: you need an exponential supply of cash for AI 2027 to even be close to working.

goatlover•5h ago
It's certainly hard to imagine the political situation in the US resulting in UBI anytime soon, while at the same time the party in control wants unregulated AI development for the next decade.
bcrosby95•4h ago
It's the '30s with no FDR in sight. It won't end well for anyone.
gensym•5h ago
> AI 2027 is unmitigated bullshit, but with graphs, so people think there is a science to it.

AI 2027 is classic Rationalist/LessWrong/AI Doomer Motte-Bailey - it's a science fiction story that pretends to be rigorous and predictive but in such a way that when you point out it's neither, the authors can fall back to "it's just a story".

At first I was surprised at how much traction this thing got, but this is the type of argument that community has been refining for decades at this point, and it's pretty effective on people who lack the antibodies for it.

tux3•4h ago
It's the other way around entirely: the story is the unrigorous bailey; when confronted, they fall back to the actual research behind it.

And you can certainly criticize the research, but you've got the motte and the bailey backwards

mitthrowaway2•4h ago
I'm very much an AI doomer myself, and even I don't think AI 2027 holds water. I find myself quite confused about what its proponents (including Scott Alexander) are even expecting to get from the project, because it seems to me like the median result will be a big loss of AI-doomer credibility in 2028, when the talking point shifts to "but it's a long-tailed prediction!"
hollerith•4h ago
Same here. I ask the reader not to react to AI 2027 by dismissing the possibility that it is quite dangerous to let the AI labs continue with their labbing.
elefanten•4h ago
This is feeling like a retread of climate change messaging. A serious problem requiring serious thought (even without “AI doom” as the scenario, the political, economic, and social disruptions alone suffice), but one being most loudly championed via aggressive timelines and significant exaggerations.

The overreaction (on both sides) to be followed by fatigue and disinterest.

adastra22•3h ago
Or maybe, just maybe, AI doom isn’t a serious problem, and the lack of credible arguments for it should be evidence of such.
098799•4h ago
Because if we're unlucky, Scott will think in the final seconds of his life, as he watches the world burn, "I could have tried harder and worried less about my reputation".
mitthrowaway2•2h ago
I don't think it's a matter of being worried about reputation. Making credible predictions and rigorous analysis is important in all scenarios. If superintelligence really strikes in 2027, I feel like AI 2027 would be right only by coincidence, and would probably only have detracted from safety engineering efforts in the process.
heavyset_go•3h ago
Scott will just post a ten-thousand-word article to deflect, and his audience will reorient themselves like they always do.
mitthrowaway2•2h ago
You say "like they always do"; are there any previous examples of them always doing such?
stego-tech•3h ago
It got traction because it supported everyone’s position in some way:

* Pro-safety folks could point at it and say this is why AI development should slow down or stop

* LLM-doomer folks (disclaimer: it me) can point at it and mock its pie-in-the-sky charts and milestones, as well as its handwashing of any actual issues LLMs have at present, or even just mock the persistent BS nonsense of “AI will eliminate jobs but the economy [built atop consumer spending] will grow exponentially forever so it’ll be fine” that’s so often spewed like sewage

* AI boosters and accelerationists can point to it as why we should speed ahead even faster, because you see, everyone will likely be fine in the end and you can totes trust us to slow down and behave safely at the right moment, swearsies

Good fiction always tickles the brain across multiple positions and knowledge domains, and AI 2027 was no different. It’s a parable warning about the extreme dangers of AI, but it fails to mention how immediate those dangers are (AI is already being deployed in kamikaze drones), and it ultimately wraps it all up as akin to a coin toss between an American or a Chinese Empire. It makes a lot of assumptions to sell its particular narrative, to serve its own agenda.

heavyset_go•3h ago
It got traction because it hyped AI companies' products to a comical level. It's simply great marketing.
stego-tech•3h ago
Great fiction is itself great marketing. Gotta move that merch (or in AI's case, VC funding).
OgsyedIE•4h ago
I disagree with the forecast too, but your critique is off-base. The assumption that exponential cash is required assumes that subexponential capex can't chug along gradually without the industry collapsing into mass bankruptcy. Additionally, the investment cash that the likes of SoftBank are throwing away comes from private holdings like pensions and has little to nothing to do with the sovereign holdings of OPEC+ nations. The real reason it doesn't hold water is the bottleneck on compute production: TSMC is still the only supplier of anything useful for foundation-model training, and their expansions only appear big and/or fast if you read the likes of Forbes.
pier25•3h ago
> AI 2027 is unmitigated bullshit, but with graphs, so people think there is a science to it.

One of the best things I've read all day.

JimDabell•3h ago
It’s not just changing economics that will derail the projections. The story gives them enough compute and intelligence to massively sway public opinion and elections, but then seems to assume the world will just keep working the same way on those fronts. They think ASI will be invented, but 60% of the public will disapprove; I guess a successful PR campaign is too difficult for the “country of geniuses in a datacenter”?
jvalencia•5h ago
It's like the invention of the washing machine. People didn't stop doing chores; they just did them more efficiently.

Coders won't stop existing; they'll just do more and compete at higher levels. The losers are the ones who won't or can't adapt.

falcor84•5h ago
I suppose that those who stayed in the washing business and competed at a higher level are the ones running their own laundromats; are they the big winners of this technological shift?
alganet•4h ago
What are you even talking about?

The article is not about AI replacing jobs. It doesn't even touch this subject.

fasthands9•3h ago
Yeah. For understandable reasons that gets covered a lot too, but AI 2027 is really about the risk of self-replicating AI. Is an AI virus possible, and could it be easily stopped by humans and our military?
alganet•3h ago
Actually, the subject has shifted from discussing any specific forecast to "really, how reliable are these forecasts?"
bgwalter•4h ago
No, all washing machines were centralized in the OpenWash company. In order to do your laundry, you needed a subscription and had to send your clothes to San Francisco and back.
jgalt212•3h ago
Excellent analogy
vntok•3h ago
Exactly, it wasn't the case then with washing machines and it's not the case now with AI. Your example is pretty relevant!

Today, anyone can run SOTA open-weights models in the comfort of their home for much less than the price of a ~1929 electric washing machine ($150 then or $2,800 today).
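
A rough check of that conversion, using approximate US CPI values (1982-84 = 100; the exact figures are my assumption, from memory):

    # $150 in 1929 dollars, adjusted by the CPI ratio:
    price_1929 = 150
    cpi_1929, cpi_2024 = 17.2, 314.0   # approximate CPI values
    print(price_1929 * cpi_2024 / cpi_1929)  # ~2740, close to the ~$2,800 quoted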

er4hn•2h ago
That was something I struggled to understand in AI-2027. They have China nationalize DeepCent, so there's only one Chinese lab. I don't understand why OpenBrain doesn't form multiple competing labs, because that seems to be what happened IRL before this was written.
stego-tech•3h ago
Reading through the comments, I am so glad I’m not the only one beyond done with these stupid clapbacks between boosters and doomers over a work of fiction that conveniently ignores present harms and tangible reality in knowledge domains outside of AI - like physics, biology, economics, etc.

If I didn’t know better, it’s almost like there’s a vested interest in propping these things up rather than letting them stand freely and letting the “invisible hand of the free market” decide if they’re of value.

old_man_cato•3h ago
Sometimes I feel like I'm losing my mind with this shit.

Am I to understand that a bunch of "experts" created a model; surrounded the findings of that model with a fancy website, replete with charts and diagrams; that the website suggests the possibility of some doomsday scenario; that its headline says "We predict that the impact of superhuman AI over the next decade will be enormous, exceeding that of the Industrial Revolution" (WILL be enormous, not MIGHT be); that they went on some of the biggest podcasts in the world talking about it; that a physicist comes along and says yeah, this is shoddy work; and that the clapback is "Well yeah, it's an informed guess, not physics or anything"?

What was the point of the website if this is just some guess? What was the point of the press tour? I mean are these people literally fucking insane?

refulgentis•3h ago
Correct. Entirely.

And I'm yuge on LLMs.

It is very much one of those things that makes me feel old and/or scared, because I don't believe this would have been swallowed as easily, say, 10 years ago.

As neutrally as possible, I think everyone can agree:

- There was a good but very long overview of LLMs from an ex-OpenAI employee. Good stuff, really well-written.

- It concludes rapidly, by hastily drawing a graph of "relative education level of AI" versus "year" and drawing a line from high school 2023 => college grad 2024 => PhD 2025 => post-PhD 2026 => AGI 2027.

- Later, this gets published by the same OpenAI guy, then the SlateStarCodex guy, and some other guy.

- You could describe it as taking the original, cutting out all the boring leadup, jumping right to "AGI 2027", then writing out a too-cute-by-half, way-too-long geopolitics ramble about China vs. the US.

It's mildly funny to me, in that yesteryear's contrarians are today's MSM, and yet, they face ~0 concerted criticism.

In the last comment thread on this article, someone jumped in to discuss the importance of more "experts in the field" contributing, meaning psychiatrist Scott Siskind. The idea is that writing about something makes you an expert, which leads us to tedious self-fellating like Scott's recent article letting us know that LLMs don't have to have an assistant character, and how he predicted this years ago.

It's not so funny, in that the next time a science research article is posted here, as is tradition, 30% will be claiming science writers never understand anything and can't write etc. etc.

heavyset_go•3h ago
The point? MIRI and friends want more donations.
old_man_cato•2h ago
Well, yeah. Obviously.
shaldengeki•2h ago
No, you're wrong. They wrote the story before coming up with the model!

In fact, the model and technical work have basically nothing to do with the short story, aka the part that everyone actually read. This is pointed out in the critique, where titotal notes that a graph widely disseminated by the authors appears to have been generated by a completely different and unpublished model.

old_man_cato•1h ago
https://ai-2027.com/research says that:

AI 2027 relies on several key forecasts that couldn't be fully justified in the main text. Below we present the detailed research supporting these predictions.

You're saying the story was written, then the models were created, and the two have nothing to do with one another? Then why does the research section say "Below we present the detailed research supporting these predictions"?

shaldengeki•1h ago
Yes, that's correct. The authors themselves are being extremely careful (and, I'd argue, misleading) in their wording. The right way to interpret those words is "this is literally a model that supports our predictions".

Here is the primary author of the timelines forecast:

> In our website frontpage, I think we were pretty careful not to overclaim. We say that the forecast is our "best guess", "informed by trend extrapolations, wargames, ..." Then in the "How did we write it?" box we basically just say it was written iteratively and informed by wargames and feedback. [...] I don't think we said anywhere that it was backed up by straightforward, strongly empirically validated extrapolations.

> In our initial tweet, Daniel said it was a "deeply researched" scenario forecast. This still seems accurate to me, we spent quite a lot of time on it (both the scenario and supplements) and I still think our supplementary research is mostly state of the art, though I can see how people could take it too strongly.

https://www.lesswrong.com/posts/PAYfmG2aRbdb74mEp/a-deep-cri...

Here is one staff member at Lightcone, the folks credited with the design work on the website:

> I think the actual epistemic process that happened here is something like:

> * The AI 2027 authors had some high-level arguments that AI might be a very big deal soon

> * They wrote down a bunch of concrete scenarios that seemed like they would follow from those arguments and checked if they sounded coherent and plausible and consistent with lots of other things they thought about the world

> * As part of that checking, one thing they checked was whether these scenarios would be some kind of huge break from existing trends, which I do think is a hard thing to do, but is an important thing to pay attention to

> The right way to interpret the "timeline forecast" sections is not as "here is a simple extrapolation methodology that generated our whole worldview" but instead as a "here is some methodology that sanity-checked that our worldview is not in obvious contradiction to reasonable assumptions about economic growth"

https://www.lesswrong.com/posts/PAYfmG2aRbdb74mEp/a-deep-cri...

recursivecaveat•55m ago
This quote is kind of a killer for me: https://news.ycombinator.com/item?id=44065615 I mean, if your prediction disagrees with your short story, and you decide to just keep the story because changing the dates is too annoying, how seriously should anyone take you?
old_man_cato•39m ago
Ok, yeah, I take the point that one illustration did not obviously precede the other, and that the two are likely coincident results of a worldview.

I don't think it changes anything but thanks for the correction.

shitloadofbooks•3h ago
AI proponents keep drawing perfectly straight lines from "no AI --> LLMs exist --> LLMs write some adequate code sometimes" up into the horizon of the Y axis, where AIs run all governments, write all code, paint all paintings, and so on.

There's a large overlap with the crypto true-believers who were convinced after seeing "no blockchain --> blockchain exists" that all laws would be enshrined in the blockchain, all business would be done with blockchains, etc.

We've had automation in the past; it didn't decimate the labour force; it just changed how people work.

And we didn't go from handwashing clothes --> washing machines --> all flat surfaces are cleaned daily by washing robots...

refulgentis•2h ago
Would advise, generally, that AI isn't crypto.

It's easy to lapse into personifying it and caricaturing the-thing-in-toto, but then we end up at obvious absurdities - to wit:

- we're on HN; it'd be news to most readers that there's a "large overlap" of "true-believers". AI was a regular discussion topic here a loooong time before ChatGPT, even before OpenAI (been here since 2009).

- Similarly "AI proponents keep drawing perfectly straight lines...AIs run all governments, write all code, paint all paintings and so on."

The technical term would be "strawmen", I believe.

Or maybe begging the question (who are these true-believers who overlap? who are these AI proponents?).

Either way, you're not likely to find these easy-to-knock-down caricatures on HN. Maybe some college hypebeast on Twitter. But not here.

mystified5016•1h ago
I have personally seen all of these people on HN.
refulgentis•23m ago
Right - more directly, asserting they're overlapping, and then asserting that all members of both sets back the same obviously-wrong argument(s), is a recipe for dull responses from the likes of me :)

I am certain you have observed N members of each set. It's the rest that doesn't follow.

gmuslera•2h ago
My main objection to this kind of prediction is that predictions (at least sufficiently well-known ones) become part of the past that shapes the future. Even given a good extrapolation of current trends, the prediction itself can make things diverge, converge, or do something totally different, because the main decision makers will take it into account, and that reaction is not part of the trend. Especially with sufficiently disruptive predictions that paint an undesirable future for all or most decision makers.

Unless it hits hard in some of the areas where we have cognitive biases and are not fully rational about the consequences.
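
A toy sketch of that reflexivity, with arbitrary numbers: once decision makers react to a published forecast, the realized path diverges from the naive extrapolation that produced the forecast in the first place.

    # Arbitrary numbers: a trend forecast that actors push back against.
    def step(x):
        return 1.10 * x  # underlying trend: +10% per step

    def simulate(steps, response=0.0):
        # response > 0 models decision makers damping the forecast path.
        x = 100.0
        for _ in range(steps):
            x = step(x) * (1.0 - response)
        return x

    print(round(simulate(10)))                 # naive extrapolation: 259
    print(round(simulate(10, response=0.05)))  # forecast-aware world: 155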