
AI should only run as fast as we can catch up

https://higashi.blog/2025/12/07/ai-verification/
32•yuedongze•1h ago

Comments

rogerkirkness•1h ago
Appealing, but this is coming from someone smart/thoughtful. No offence to 'rest of world', but I think that most people have felt this way for years. And realistically in a year, there won't be any people who can keep up.
airstrike•51m ago
> And realistically in a year, there won't be any people who can keep up.

Bold claim. They said the same thing at the start of this year.

adventured•12m ago
You're all arguing over how many single digit years it'll take at this point.

It doesn't matter if it takes another 12 or 36 months to make that claim true. It doesn't matter if it takes five years.

Is AI coming for most of the software jobs? Yes it is. It's moving very quickly, and nothing can stop it. The progress has been exceptionally clear (early GPT to Gemini 3 / Opus 4.5 / Codex).

bdangubic•9m ago
> Is AI coming for most of the software jobs?

be cool to start with one before we move to most…

yuedongze•46m ago
I'm hoping this can introduce a framework to help people visualize the problem and figure out a way to close that gap. Image generation is something everyone can verify, but code generation is perhaps not. But if we could make verifying code as effortless as verifying images (not saying it's possible), our productivity could reach the next level...
drlobster•42m ago
I think you're underestimating how good these image generators are at the moment.
yuedongze•41m ago
Oh, I mean the other direction! Checking whether a generated image is "good", i.e. that no one can tell something is off and it looks natural, rather than checking whether it's fake.
dontlikeyoueith•29m ago
> And realistically in a year, there won't be any people who can keep up.

I've heard the same claim every year since GPT-3.

It's still just as irrational as it was then.

adventured•15m ago
You're rather dramatically demonstrating how remarkable the progress has been: GPT-3 was horrible at coding. Claude Opus 4.5 is good at it.

They're already far faster than anybody on HN could ever be. Whether it takes another five years or ten, in that span of time nobody on HN will be able to keep up with the top tier models. It's not irrational, it's guaranteed. The progress has been extraordinary and obvious, the direction is certain, the outcome is certain. All that is left is to debate whether it's a couple of years or closer to a decade.

Arainach•7m ago
People claimed GPT-3 was great at coding when it launched. Those who said otherwise were dismissed. That has continued to be the case in every generation.
gradus_ad•48m ago
The proliferation of nondeterministically generated code is here to stay. Part of our response must be more dynamic, more comprehensive and more realistic workload simulation and testing frameworks.
yuedongze•43m ago
I've seen a lot of startups that use AI to QA human work. How about the idea of using humans to QA AI work? A lot of interesting things might follow.
Aldipower•40m ago
Sounds inhuman.
A4ET8a8uTh0_v2•36m ago
Nah, sounds like management, but I am repeating myself. In all seriousness, I have found myself having to carefully rein in some similar decisions. I don't want to get into details, but there are times I wonder whether they understand how things really work, or whether people need some 'floor'-level exposure before they just decree stuff.
quantummagic•35m ago
As an industry, we've been doing the same thing to people in almost every other sector of the workforce, since we began. Automation is just starting to come for us now, and a lot of us are really pissed off about it. All of a sudden, we're humanitarians.
__loam•36m ago
No thanks.
adventured•21m ago
A large percentage (at least 50%) of the market for software developers will shift to lower paid jobs focused on managing, inspecting and testing the work that AI does. If a median software developer job paid $125k before, it'll shift to $65k-$85k type AI babysitting work after.
colechristensen•12m ago
Yes, but not like what you think. Programmers are going to look more like product managers with extra technical context.

AI is also great at looking for its own quality problems.

Yesterday on an entirely LLM generated codebase

Prompt: > SEARCH FOR ANTIPATTERNS

Found 17 antipatterns across the codebase:

And then what followed was a detailed list: about a third of them I thought were pretty important, a third were arguably issues, and the rest were either unimportant or effectively "this project isn't fully functional".

As an engineer, I didn't have to find code errors or fix code errors, I had to pick which errors were important and then give instructions to have them fixed.
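
That triage loop can be sketched in a few lines of Python. Everything here is hypothetical, invented to illustrate the workflow: the `Finding` type, the severity labels, and the example findings are not from any real tool.

```python
from dataclasses import dataclass

# Hypothetical sketch of the triage step described above: the engineer no
# longer hunts for errors, they classify the model's reported findings and
# decide which ones are worth an instruction to fix.
@dataclass
class Finding:
    description: str
    severity: str  # human-assigned: "important", "arguable", or "ignorable"

def triage(findings: list[Finding]) -> tuple[list[Finding], list[Finding]]:
    """Split model-reported antipatterns into fix-now vs. defer buckets."""
    fix_now = [f for f in findings if f.severity == "important"]
    defer = [f for f in findings if f.severity != "important"]
    return fix_now, defer

findings = [
    Finding("SQL built by string concatenation", "important"),
    Finding("module-level mutable state", "arguable"),
    Finding("TODO comment left in generated code", "ignorable"),
]
fix_now, defer = triage(findings)
print(len(fix_now), len(defer))  # prints: 1 2
```

The point of the sketch is where the human sits: the model produces the list, the severity field is the human judgment, and only the "important" bucket turns into new instructions.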

OptionOfT•20m ago
I disagree. I think we're testing it, and we haven't seen the worst of it yet.

And I think it's less about non-deterministic code (the code is actually still deterministic) but more about this new-fangled tool out there that finally allows non-coders to generate something that looks like it works. And in many cases it does.

Like a movie set. Viewed from the right angle it looks just right. Peek behind the curtain and it's all wood, thinly painted, and it's usually easier to rebuild from scratch than to add a layer on top.

CGMthrowaway•46m ago
> AI should only run as fast as we can catch up

Good principle. This is exactly why we research vaccines and bioweapons side by side in the labs, for example.

yannyu•20m ago
I think there's a lot of utility to current AI tools, but it's also clear we're in a very unsettled phase of this technology. We likely won't see for years where the technology lands in terms of capability, or the changes that will be made to society and industry to accommodate it.

Somewhat unfortunately, the sheer amount of money being poured into AI means that it's being forced upon many of us, even if we didn't want it. Which results in a stark, vast gap like the author is describing, where things are moving so fast that it can feel like we may never have time to catch up.

And what's even worse, because of this, industry and individuals are now trying to have the tool correct and moderate itself, which intuitively seems wrong from both a technical and societal standpoint.

cons0le•19m ago
I directly asked Gemini how to get world peace. It said the world should prioritize addressing climate change, inequality, and discrimination. Yeah, we're not gonna do any of that shit. So I don't know what the point of "superintelligent" AI is if we aren't going to even listen to it for the basic big-picture stuff. Any sort of "utopia" that people imagine AI bringing is doomed to fail because we already can't cooperate without AI.
PunchyHamster•18m ago
I dunno, many people have that weird, unfounded trust in what AI says, more than in actual human experts, it seems.
bilbo0s•12m ago
Because AI, or rather, an LLM, is the consensus of many human experts as encoded in its embedding. So it is better, but only for those who are already expert in what they're asking.

The problem is, you have to know enough about the subject on which you're asking a question to land in the right place in the embedding. If you don't, you'll just get bunk. (I know it's popular to call AI bunk "hallucinations" these days, but really, if it were being spouted by a half-wit human we'd just call it "bunk".)

So you really have to be an expert in order to maximize your use of an LLM. And even then, you'll only be able to maximize your use of that LLM in the field in which your expertise lies.

A programmer, for instance, will likely never be able to ask a coherent enough question about economics or oncology for an LLM to give a reliable answer. Similarly, an oncologist will never be able to give a coherent enough software specification for an LLM to write an application for him or her.

That's the Achilles' heel of AI today as implemented by LLMs.

jackblemming•9m ago
> is the consensus of many human experts as encoded in its embedding

That’s not true.

potsandpans•10m ago
I don't believe that this is going to happen, but the primary arguments revolving around a "super intelligent" ai involve removing the need for us to listen to it.

A super intelligent ai would have agency, and when incentives are not aligned would be adversarial.

In the caricature scenario, we'd ask, "super AI, how do we achieve world peace?" It would answer the same way, but then solve it in a non-human-centric way: reducing humanity's autonomy over the world.

Fixed: anthropogenic climate change resolved, inequality and discrimination reduced (by reducing population by 90%, and putting the rest in virtual reality)

ASalazarMX•6m ago
> I don't know what the point of "super intelligent" AI is if we aren't going to even listen to it

Because you asked the wrong question. The most likely question would be "How do I make a quadrillion dollars and humiliate my super rich peers?".

But realistically, it gave you an answer according to its capacity. A real superintelligent AI, and I mean oh-god-we-are-but-insects-in-its-shadow superintelligence, would give you a roadmap and blueprint, and it would account for our deep-rooted human flaws, so no one reading it seriously could dismiss it as superficial. In fact, any world elite reading it would see it as a chance to humiliate their world-elite peers and get all the glory for themselves.

You know how adults can fool little children to do what they don't want to? We would be the toddlers in that scenario. I hope this hypothetical AI has humans in high regard, because that would be the only thing saving us from ourselves.

blauditore•14m ago
All these engineers who claim to write most code through AI: I wonder what kind of codebase that is. I keep on trying, but it always ends up producing superficially okay-looking code that gets the nuances wrong. It also fails to fix them (it just changes random stuff) when pointed to said nuances.

I work on a large product with two decades of accumulated legacy, maybe that's the problem. I can see though how generating and editing a simple greenfield web frontend project could work much better, as long as actual complexity is low.

jascha_eng•8m ago
Verification is key, and the issue is that almost all AI-generated code looks plausible, so just reading the code is usually not enough. You need to build extremely good testing systems and actually run through the scenarios that you want to ensure work to be confident in the results. This can be preview deployments, AI-generated end-to-end tests that produce video output you can watch, or just a very good test suite with guard rails.

Without such automation and guard rails, AI generated code eventually becomes a burden on your team because you simply can't manually verify every scenario.
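
A minimal sketch of that kind of guard rail in Python. The function under test is a made-up stand-in for AI-generated code, and the scenario table is hypothetical; the point is that verification runs the scenarios mechanically instead of relying on a human reading the code.

```python
# Scenario-based verification sketch: rather than reading AI-generated code
# line by line, run it through the concrete scenarios you care about and let
# failures surface mechanically. apply_discount stands in for generated code.
def apply_discount(price: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent out of range")
    return round(price * (1 - percent / 100), 2)

# Each scenario: (name, arguments, expected result).
SCENARIOS = [
    ("full price", (100.0, 0), 100.0),
    ("half off", (80.0, 50), 40.0),
    ("free", (19.99, 100), 0.0),
]

def run_scenarios(fn, scenarios):
    """Return the scenarios whose actual result differs from the expected one."""
    return [
        (name, expected, fn(*args))
        for name, args, expected in scenarios
        if fn(*args) != expected
    ]

print(run_scenarios(apply_discount, SCENARIOS))  # prints: [] (all pass)
```

A regenerated version of the code either passes the same scenario table or produces a concrete failure list, which is exactly the "guard rail" role: the human curates scenarios, the harness does the verifying.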

yuedongze•4m ago
Indeed, I see verification debt outweighing traditional tech debt very, very soon...