
DraftKings hopes to score big with new prediction markets app

https://www.cbsnews.com/news/draftkings-prediction-markets-app-sports-betting/
2•mhb•4m ago•0 comments

Laws That Do Harm (1982)

https://miltonfriedman.hoover.org/internal/media/dispatcher/214279/full
2•mhb•8m ago•0 comments

From Zero to RAG (Part 1)

https://turtosa.com/blog/from-zero-to-rag
1•kevinroleke•9m ago•0 comments

Google and Apple warn employees on visas to avoid international travel

https://techcrunch.com/2025/12/20/google-and-apple-reportedly-warn-employees-on-visas-to-avoid-in...
4•SilverElfin•9m ago•0 comments

Climate change's hidden price tag: a drop in our income

https://news.arizona.edu/news/climate-changes-hidden-price-tag-drop-our-income
1•geox•13m ago•1 comment

HoustonTracker2 – A Music Sequencer for the Texas Instruments TI-82

https://www.irrlichtproject.de/houston/
1•austinallegro•13m ago•0 comments

TailwindSQL: Like TailwindCSS but SQL.className your way to database queries

https://tailwindsql.xyz/
1•sawirricardo•15m ago•0 comments

This is a duplicate. Please delete it.

https://community.ntppool.org/t/ntp-at-nist-boulder-has-lost-power/4192
1•nobody9999•18m ago•1 comment

HBM Supply Curve Gets Steeper, but Still Can't Meet Demand

https://www.nextplatform.com/2025/12/19/hbm-supply-curve-gets-steeper-but-still-cant-meet-demand/
1•rbanffy•19m ago•0 comments

U.S. Plans $80B Nuclear Power Expansion

https://spectrum.ieee.org/80-billion-us-nuclear-power
2•rbanffy•21m ago•1 comment

When creating images, AI keeps remixing the same 12 stock photo clichés

https://www.science.org/content/article/when-creating-images-ai-keeps-remixing-same-12-stock-phot...
1•rbanffy•22m ago•0 comments

C-reactive protein outpaced 'bad' cholesterol as leading heart disease risk marker

https://theconversation.com/how-c-reactive-protein-outpaced-bad-cholesterol-as-leading-heart-dise...
3•bikenaga•24m ago•0 comments

STPA (System Theoretic Process Analysis) at Google

https://sre.google/resources/practices-and-processes/stpa/
1•motxilo•28m ago•0 comments

Rcarmo/Guerite: A Watchtower Replacement

https://github.com/rcarmo/guerite
1•rcarmo•30m ago•0 comments

OpenWRT 25.12.0-RC1 Released

https://downloads.openwrt.org/releases/25.12.0-rc1/
2•josteink•36m ago•0 comments

OpenWRT 24.10.5 Released

https://openwrt.org/releases/24.10/notes-24.10.5
2•josteink•38m ago•0 comments

Why the fuel-switch story does not explain the AI171 crash

https://frontline.thehindu.com/the-nation/ai-171-crash-boeing-787-electrical-failure-core-network...
1•sltr•38m ago•1 comment

Show HN: Calcu-gator.com – Financial calculators for Canadians

https://calcu-gator.com/
2•Nitromax•42m ago•0 comments

Monte Carlo Cubes

https://thevesselshortstories.substack.com/p/monte-carlo-cubes
1•kawrydav•46m ago•0 comments

I wrote a code editor in C and now I'm a changed man

https://github.com/thisismars-x/light
10•birdculture•47m ago•2 comments

Show HN: Prove your compliance posture with automated evidence (OSCAL)

https://github.com/clay-good/attestful
1•hireclay•47m ago•0 comments

I built a tool to do my bookkeeping for me (freelancer)

https://billpal.io/
2•romanleeb•49m ago•1 comment

FrontierScience Benchmark by OpenAI

https://openai.com/index/frontierscience/
2•mustaphah•51m ago•0 comments

Show HN: SolarSystem, a Solarized-like theme generator using OKHSL and APCA

https://solarsys.dev/
1•zacharyvoase•55m ago•0 comments

More databases should be single-threaded

https://blog.konsti.xyz/p/8c8a399f-8cfe-47dd-9278-9527105d07dc/
3•lawrencechen•56m ago•0 comments

Titan's strong tidal dissipation precludes a subsurface ocean

https://www.sciencedaily.com/releases/2025/12/251220104621.htm
2•gradus_ad•56m ago•0 comments

SearchArray – rethinking full text search [video]

https://www.youtube.com/watch?v=wJ3RCV338DA
3•softwaredoug•59m ago•0 comments

Timekeeping on Mars

https://en.wikipedia.org/wiki/Timekeeping_on_Mars
1•d_silin•1h ago•2 comments

Advanced Tools – Bringing Anthropic's advanced tool use to any LLM provider

https://github.com/hetpatel-11/advanced-tools
1•hkpatel•1h ago•1 comment

GitHub Wrapped – enter username and get a video of your 2025 coding stats

https://app.aipodcast.ing/utils/github-wrapped
1•adithyan_win•1h ago•0 comments

Ask HN: Are you afraid of AI making you unemployable within the next few years?

5•johnwheeler•1h ago
On Hacker News and Twitter, the consensus view is that no one is afraid. People concede that junior engineers and grad students might be the most affected, but they still seem to regard their own situations as sustainable. My question is: is this just wishful thinking and human nature, trying to combat the inevitable? I ask because I seriously don't see a future where there are lots of programmers anymore. I see mass unemployment for programmers. People are in denial, and all of the claims that AI can't write code without making mistakes stop being valid the moment an AI that writes flawless code is released, potentially overnight. Claude 4.5 is a good example. I just don't see any valid argument that the technology won't get to a point where it makes the job irrelevant. Not irrelevant, exactly, but where it completely changes the economics.

Comments

uberman•1h ago
I use Claude 4.5 almost every day. It makes mistakes every day. The worst mistakes are the ones that are not obvious; only careful review reveals the flaws. At the moment, even the best AI can't be relied on for even a modest refactoring. What AI does at the moment is make senior developers worth more and junior developers worth less. I am not at all worried about my own job.
johnwheeler•1h ago
Thank you for your response. This is exactly the type of commentary I'm talking about. The key phrase is "at the moment." It's not that developers will be replaced outright, but I think there will be far less need for them.

I think the flaws are going to be solved, and if that happens, what then? I do believe there needs to be a human in the loop, but I don't think there need to be humans, plural. Eventually.

I believe this is denial. The claim that the best AI can't be reliable enough to do a modest refactoring is not correct. Yes, it can. What it currently cannot do is write a full app from start to finish, but they're working on longer task execution. And this is before any of the big data centers have even been built. What happens then? You get the naysayers who say, "Well, the scaling laws don't apply," but there are a lot of people who think they do apply.

ThrowawayR2•1h ago
If anybody who disagrees with your assessment is "in denial (sic)", why should people bother responding to your question seriously?
johnwheeler•13m ago
It's not about people disagreeing with my assessment. It's that people keep saying, "I'm not afraid of AI because it makes mistakes." That's the main argument I've heard. I don't know if those people are ignorant, arrogant, or in denial. Or maybe they're right. I don't know. But I don't think they're right. Human nature leads me to believe they're in denial. Or they're ignorant. I don't think there's necessarily any shame in being in denial or ignorant. They don't know or see what I see.

I don't have to write code anymore, and the code that's coming out needs less and less of my intervention. Maybe I'm just much better at prompting than other people. But I doubt that (although I do think I'm probably better at prompting than most).

The two things I hear are:

1. You'll always need a human in the loop

2. AI isn't any good at writing code

The first one sounds more plausible, but it means fewer programmers over time.

nness•1h ago
Largely, no.

AI would need to (1) perform better than a person in a particular role, (2) do so more cheaply than that person's total cost, and (3) do so with fewer mistakes and reduced liability.

Humans are objectively quite cheap. In fact for the output of a single human, we're the cheapest we've ever been in history (particularly in relation to the cost of the investment in AI and the kind of roles AI would be 'replacing.')

If there are any economic shifts, they will be increases in per-person efficiency, requiring a smaller workforce. I don't see that changing significantly in the next 5-10 years.

johnwheeler•1h ago
I guess the main thing people aren't taking into account, from what I see, is that the models are substantially improving. Claude Opus 4.5 is markedly better than Claude Sonnet 3.7. If the jump to version 5 represents a similar leap, I see it as game over, pretty much. You'll just need one person to manage all your systems, or the subsystems if the entire system is extremely large. And then I can't think past that. I don't know how long it is before AI replaces that central orchestrator and takes the human out of the loop, or if it ever does; that's what they seem to want it to do.

Anyway, I appreciate the response. I don't know how old you are, but I'm kind of old. And I've noticed that I've become much more cynical and pessimistic, not necessarily for any good reasons. So maybe it's just that.

websiteapi•44m ago
you're assuming the growth will continue at the same rate, which is hardly a certainty
diamondap•59m ago
> Humans are objectively quite cheap.

I disagree with that statement when it comes to software developers. They are actually quite expensive. They typically enter the workforce with 16 years of education (assuming they have a college degree), and may also have a family and a mortgage. They have relatively high salaries, plus health insurance, and they can't work when they're sleeping, sick, or on vacation.

I once worked for a software consultancy where the owner said, "The worst thing about owning this kind of company is that all my capital walks out the door at six p.m."

AI won't do that. It'll work round the clock if you pay for it.

We do still need a human in the loop with AI. In part, that's to check and verify its work. In part, it's so the corporate overlords have someone to fire when things go wrong. From the looks of things right now, AI will never be "responsible" for its own work.

diamondap•1h ago
I think AI will substantially thin out the ranks of programmers over the next five years or so. I've been very impressed with Claude 4.5 and have been using it daily at work. It tends to produce very good, clean, well-documented code and tests.

It does still need an experienced human to review its work, and I do regularly find issues with its output that only a mid-level or senior developer would notice. For example, I saw it write several Python methods this week that, when called simultaneously, would lead to deadlock in an external SQL database. I happen to know these methods WILL be called simultaneously, so I was able to fix the issue.
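The deadlock mentioned above is the classic lock-ordering hazard: two code paths that acquire the same pair of locks in opposite orders. As a hypothetical sketch (the comment doesn't show the actual methods, and Python threading locks stand in here for SQL row locks), the bug and the usual fix look like this:

```python
import threading

# Two locks standing in for row locks on two database rows.
lock_a = threading.Lock()
lock_b = threading.Lock()

def update_unsafe_ab():
    # Path 1 locks A, then B ...
    with lock_a:
        with lock_b:
            pass  # ... do work holding both

def update_unsafe_ba():
    # ... while path 2 locks B, then A. Run concurrently, each thread
    # can grab its first lock and wait forever for the other's: deadlock.
    with lock_b:
        with lock_a:
            pass

def update_safe(first, second):
    # The standard fix: always acquire locks in one global order
    # (here, sorted by id) so no cycle of waiters can ever form.
    lo, hi = sorted((first, second), key=id)
    with lo:
        with hi:
            pass

# Both call sites now lock in the same order, whichever way they're called.
t1 = threading.Thread(target=update_safe, args=(lock_a, lock_b))
t2 = threading.Thread(target=update_safe, args=(lock_b, lock_a))
t1.start(); t2.start()
t1.join(); t2.join()
```

The same idea applies inside a real database: have every transaction touch rows (or tables) in one agreed-upon order, and the wait-for cycle behind the deadlock can never form.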

In existing large code bases that talk to many external systems and have poorly documented, esoteric business rules, I think Claude and other AIs will need supervision from an experienced developer for at least the next few years. Part of the reason for that is that many organizations simply don't capture all requirements in a way that AI can understand. Some business rules are locked up in long email threads or water cooler conversations that AI can't access.

But, yeah, Claude is already acting like a team of junior/mid-level developers for me. Because developers are highly paid, offloading their work to a machine can be hugely profitable for employers. Perhaps, over the next few years, developers will become like sys admins, for whom the machines do most of the meaningful work and the sys admin's job is to provision, troubleshoot and babysit them.

I'm getting near the end of my career, so I'm not too concerned about losing work in the years to come. What does concern me is the loss of knowledge that will come with the move to AI-driven coding. Maybe in ten years we will still need humans to babysit AI's most complicated programming work, but how many humans will there be ten years from now with the kind of deep, extensive experience that senior devs have today? How many developers will have manually provisioned and configured a server, set up and tuned a SQL database, debugged sneaky race conditions, worked out the kinks that arise between the dozens of systems that a single application must interact with?

We already see that posts to Stack Overflow have plummeted since programmers can simply ask ChatGPT or Claude how to solve a complex SQL problem or write a tricky regular expression. The AIs used to feed on Stack Overflow for answers. What will they feed on in the future? What human will have worked out the tricky problems that AI hasn't been asked to solve?

I read a few years ago that the US Navy convinced Congress to fund the construction of an aircraft carrier that the Navy didn't even need. The Navy's argument was that it took our country about eighty years to learn how to build world-class carriers. If we went an entire generation without building a new carrier, much or all of that knowledge would be lost.

The Navy was far-sighted in that decision. Tech companies are not nearly so forward thinking. AI will save them money on development in the short run, but in the long run, what will they do when new, hard-to-solve problems arise? A huge part of software engineering lies in defining the problem to be solved. What happens when we have no one left capable of defining the problems, or of hammering out solutions that have not been tried before?

benoau•55m ago
I think people should be very afraid: the jobs are only safe if it peaks in adoption and stops improving, but it shows no signs of slowing.
gitgud•44m ago
No, managers don’t want to be using Claude Code… tools change
wrxd•36m ago
As much as I would like my job to be exclusively about writing code, the reality is that the majority of it is:

- talking to people to understand how to leverage their platform and to get them to build what I need

- working in closed-source codebases. I know where the traps and the footguns are; Claude doesn't

- telling people no, that's a bad idea, don't do that. This is often more useful than a "you're absolutely right" followed by the perfect solution to the wrong problem

In short, I can think and I can learn. LLMs can’t.

Oras•17m ago
Well, with things like skills and proper memory, these things can become better. Remember 2 years ago when AI coding wasn’t even a thing?

You’re right that it wouldn’t replace everyone, but businesses will need fewer people for maintenance.

johnwheeler•5m ago
Right. I think in the near term the worry isn't about replacing people wholesale, but about replacing most, or at least many, people and causing serious economic disruption. In the limit, you would have a CEO who commands the AI to do everything, but that seems less plausible.
SonOfKyuss•7m ago
> telling people no, that’s a bad idea. Don’t do that. This is often more useful than an you’re absolutely right followed by the perfect solution to the wrong problem

This one is huge. I’ve personally witnessed many situations where a multi-million dollar mistake was avoided by a domain expert shutting down a bad idea. Good leadership recognizes this value. Bad leadership just looks at how much code you ship

cjs_ac•27m ago
The AI providers' operations remain heavily subsidised by venture capital. Eventually those investors will turn around and demand a return on their investment. The big question is, when that happens, whether LLMs will be useful enough to customers to justify paying the full cost of developing and operating them.

That said, in the meantime, I'm not confident that I'd be able to find another job if I lost my current one, because I not only have to compete against every other candidate, I also need to compete against the ethereal promise of what AI might bring in the near future.