
Where Is GPT in the Chomsky Hierarchy?

https://fi-le.net/chomsky/
1•fi-le•3m ago•0 comments

Show HN: Add keyboard shortcuts to any site with a browser extension

https://github.com/one-with-violets-in-her-lap/bind
1•sleep678765•8m ago•0 comments

OpenAI Ends 'Vesting Cliff' for New Employees in Compensation-Policy Change

https://www.wsj.com/tech/ai/openai-ends-vesting-cliff-for-new-employees-in-compensation-policy-ch...
1•divbzero•11m ago•0 comments

Rat Dystopia

https://demystifysci.com/blog/2020/7/22/rat-dystopia
1•certyfreak•11m ago•0 comments

BA fears a future where AI agents pick flights and brands get ghosted

https://www.theregister.com/2025/12/13/british_airways_fears_a_future/
2•zeristor•16m ago•0 comments

Sloot Digital Coding System

https://en.wikipedia.org/wiki/Sloot_Digital_Coding_System
1•rmason•16m ago•0 comments

Will Larson Reflects on Staff Engineer [video]

https://www.youtube.com/watch?v=RBPtGtMY8bE
1•mooreds•17m ago•0 comments

Protect Earth creates and restores woodlands, meadows, and hedgerows in the UK

https://www.protect.earth
1•mooreds•18m ago•0 comments

Codewave

https://github.com/techdebtgpt/codewave
1•handfuloflight•23m ago•0 comments

IBM: What if quantum computing is as fundamental as the origin of zero?

https://www.ibm.com/think/news/is-quantum-computing-as-fundamental-as-origin-of-zero
3•donutloop•24m ago•0 comments

UK doubles down on its quantum bet

https://www.politico.eu/article/uk-government-doubles-down-quantum-physics-computing-bet-starmer/
1•donutloop•24m ago•0 comments

From sci-fi to reality: Researchers realise quantum teleportation using tech

https://cordis.europa.eu/article/id/462587-from-sci-fi-to-reality-researchers-realise-quantum-tel...
5•donutloop•25m ago•0 comments

Stacked Diffs on GitHub

https://twitter.com/jaredpalmer/status/1999525369725215106
3•pryz•26m ago•1 comments

Long Covid involves activation of proinflammatory and immune exhaustion pathways

https://idp.nature.com/authorize?response_type=cookie&client_id=grover&redirect_uri=https%3A%2F%2...
3•bookofjoe•27m ago•0 comments

RAMageddon is coming for your smartphones and laptops

https://www.tomsguide.com/computing/laptops/worsening-ram-crisis-starting-to-impact-smartphones-a...
1•walterbell•28m ago•0 comments

Securing Coolify Cluster with Tailscale

https://taner-dev.com/articles/securing-coolify
3•morenatron•30m ago•0 comments

MicroEMACS

https://en.wikipedia.org/wiki/MicroEMACS
1•doener•31m ago•0 comments

Liberal-coded economic policies lose support in polls when proposed by Trump

https://www.politico.com/interactives/2025/trump-democratic-policies-midterms-polling/
4•alephnerd•32m ago•1 comments

Developers have canceled nearly 2k power projects this year – report

https://seekingalpha.com/news/4530351-developers-have-canceled-nearly-2000-power-projects-this-ye...
1•thelastgallon•34m ago•0 comments

The <time> element should do something

https://nolanlawson.com/2025/12/14/the-time-element-should-actually-do-something/
1•todsacerdoti•37m ago•1 comments

Google's Advent of Agents

https://adventofagents.com/
2•shubham_saboo•37m ago•0 comments

Show HN: In-browser data exploration toolkit

https://github.com/Datakitpage/Datakit
2•parsabg•41m ago•0 comments

The Future of the Linux-rs Project

https://mateolafalce.github.io/2025/The%20Future%20of%20the%20Linux-rs%20Project/TheFutureoftheLi...
1•lafalce•43m ago•1 comments

Muslim hero risked his own life to save others

https://ahmedelahmed.com
3•dorongrinstein•43m ago•3 comments

Anthropic Outage for Opus 4.5 and Sonnet 4/4.5 across all services

https://status.claude.com/incidents/9g6qpr72ttbr
70•pablo24602•47m ago•38 comments

Ozymandias

https://blog.engora.com/2025/12/ozymandias.html
2•Vermin2000•47m ago•1 comments

The Plan Is the Program

https://www.proofofconcept.pub/p/the-plan-is-the-program
1•herbertl•48m ago•0 comments

AI will transform science. Just not the way you think

https://ischemist.com/writings/long-form/will-ai-transform-science
1•hiddenseal•49m ago•0 comments

My Battle with Datetimes in Prod

https://www.datacompose.io/blog/fun-with-datetimes
1•tccole•50m ago•1 comments

Distropack now supports TAR archives aside from RPM DEB and PKG

https://distropack.dev/Blog/Post?slug=introducing-tar-package-support-simple-distribution-without...
1•segfault0x23•53m ago•1 comments

Opus 4.5 is the first model that makes me fear for my job

https://old.reddit.com/r/ClaudeAI/comments/1pmgk5c/opus_45_is_the_first_model_that_makes_me_actually/
27•nomilk•2h ago

Comments

techblueberry•1h ago
It feels like every model release has its own little hype cycle. Apparently Claude 4.5 is still climbing to its peak of inflated expectations.
krackers•1h ago
There's lots of overlap between the cryptocurrency space and AI grifter hypeman space. And the economic incentives at play throw fuel on the fire.
Forgeties79•1h ago
They behave like televangelists
giancarlostoro•1h ago
I've been using Claude Code + Opus for side projects. The only thing that's changed for me dev-wise is that I QA more, and think more about how to solve my problems.
nharada•1h ago
It definitely feels like a jump in capability. I've found that the long-term quality of the codebase doesn't take a nosedive nearly as quickly as with earlier agentic models. If anything it stays about steady, or maybe even improves, if you prompt it correctly and ask for "cleanup PRs".
markus_zhang•1h ago
Ironically, AI may replace SWEs way faster than it replaces workers in any other business, even ones still in the Stone Age.

Pick anything else and you have a far better chance of falling back on a manual process, a legal wall, or whatever else AI cannot easily replace.

Good job boys and girls. You will be remembered.

neoromantique•1h ago
For the most part, code monkeys haven't been a thing for quite some time now. I'm sure talented people will adapt and find other avenues to flourish.
pton_xd•1h ago
I have to say, it was fun while it lasted! Couldn't really have asked for a more rewarding hobby and career.

Prompting an AI just doesn't have the same feeling, unfortunately.

cmarschner•1h ago
For me it's the opposite. I do have a good feel for what I want to achieve, but translating that into program code, and testing it, has always caused me outright physical pain (and in the case of C++ I really hate it). I've been programming since age 10. Almost 40 years. And it feels like liberation.

It brings the "what to build" question front and center, while "how to build it" has become much, much easier and more productive.

markus_zhang•1h ago
Indeed. I still use AI for my side projects, but I strictly limit it to discussion only, no code. Otherwise what is the point? The good thing about programming is that, unlike playing chess, there is no real "win/lose" scenario, so I won't feel discouraged even if AI can do all the work by itself.

Same thing for science. I don't mind if AI solves all those problems, as long as it can teach me. Those problems are already "solved" by the universe anyway.

Hamuko•1h ago
Even the discussion side has been pretty meh in my mind. I was looking into a bug in a codebase filled with Claude output and, for funsies, decided to ask Claude about it. It basically generated a "This thing here could be a problem but there is manual validation for it" response, and when I looked, that manual validation was nowhere to be found.

There's so much half-working AI-generated code everywhere that I'd feel ashamed if I had to ever meet our customers.

I think the thing that gives me the most value is code review. So basically I first review my code myself, then have Claude review it and then submit for someone else to approve.

markus_zhang•56m ago
I don't discuss actual code with ChatGPT, just concepts. Like "if I have an issue and my algo looks like this, how can I debug it effectively in gdb?", or "how do I reduce lock contention if I have to satisfy A/B/...".

Maybe it's just because my side projects are fairly elementary.

And I agree that AI is pretty good at code review, especially if the code contains complex business logic.

harrall•1h ago
I don’t think it’s ironic.

The commonality of people working on AI is that they ALL know software. They make a product that solves the thing that they know how to solve best.

If all lawyers knew how to write code, we'd see more legal AI startups. But lawyers and coders are not a common overlap, nowhere near as much as SWEs and coders.

skybrian•1h ago
Already happened for copywriters, translators, and others in the tech industry:

https://www.bloodinthemachine.com/s/ai-killed-my-job

agumonkey•1h ago
Something in the back of my head tells me that automating (partial) intelligence feels different from automating a small-to-medium-scope task. Maybe I'm wrong though.
Lionga•1h ago
Dario Amodei claimed "AI will replace 90% of developers within 6 months" about a year ago. Still, they are just losing money and probably will be forever, while producing more slop code that needs even more devs to fix it.

Good job AI fanboys and girls. You will be remembered when this fake hype is over.

markus_zhang•55m ago
I'm more of a doomsayer than a fanboy. But I think it's more like "AI will replace 50% of your juniors, 25% of your seniors, and perhaps 50% of your do-nothing middle managers". And that's a fairly large number anyway.
agumonkey•1h ago
time to become a solar installer
iSloth•1h ago
Not sure I’d be worried for my job, but it’s legitimately a significant jump in capabilities, even if other models attempt to fudge higher bench results
terabytest•1h ago
How does Opus 4.5 compare to gpt-5.1-codex-max?
scosman•1h ago
roughly, much better: https://www.swebench.com
heavyset_go•1h ago
This is the new goalpost now that the "this model is so intelligent that it's sentient and dangerous" AGI hype has died down.
themafia•1h ago
Reads like astroturf to me.

> do not know what's coming for us in the next 2-3 years, hell, even next year might be the final turning point already.

What is this based on? Research? Data? Gut feeling?

> but how long will it be until even that is not needed anymore?

You just answered that. 2 to 3 years, hell, even next year, maybe.

> it also saddens me knowing where all of this is heading.

If you know where this is heading why are you not investing everything you have in these companies? Isn't that the obvious conclusion instead of wringing your hands over the loss of a coding job?

It invents a problem, provides a timeline, immediately questions itself, and then confidently prognosticates without any effort to explain the information used to arrive at this conclusion.

What am I supposed to take from this? Other than that people are generally irrational when contemplating the future?

gtowey•1h ago
We have reached the "singularity of marketing". It's what happens when an AI model has surpassed human marketers and traditional bot farms and can be used to do its own astroturfing. And then with the investment frenzy it generates, we can build the next generation of advertising intelligence and achieve infinite valuation!
jchw•1h ago
Remember when GPT-3 came out and everybody collectively freaked the hell out? That's how I've felt watching the reaction to any of the new model releases lately that make any progress.

I'm honestly not complaining about the model releases, though. Despite their shortcomings, they are extremely useful. I've found Gemini 3 to be an extremely useful learning aid, as long as I don't blindly trust its output, and if you're trying to learn, you really ought not do that anyways. (Despite what people and benchmarks say, I've already caught some random hallucinations; it still feels like you're likely to run into them on a regular basis. Not a huge problem, but, you know.)

exabrial•1h ago
This just looks like an advertisement?
jsheard•1h ago
It's just a normal Reddit account which was dormant until two weeks ago, when it suddenly started spamming threads exclusively about AI's imminent destruction of the job market. Nothing to see here!

https://www.reddit.com/r/ClaudeAI/comments/1pe6q11/deep_down...

https://www.reddit.com/r/ClaudeAI/comments/1pb57bm/im_honest...

https://www.reddit.com/r/ChatGPT/comments/1pm7zm4/ai_cant_ev...

https://www.reddit.com/r/ArtificialInteligence/comments/1plj...

https://www.reddit.com/r/ArtificialInteligence/comments/1pft...

https://www.reddit.com/r/AI_Agents/comments/1pb6pjz/im_hones...

https://www.reddit.com/r/ExperiencedDevs/comments/1phktji/ai...

https://www.reddit.com/r/csMajors/comments/1pk2f7b/ (cached title: Your CS degree is worthless. Switch over. Now.)

quantumHazer•1h ago
I wouldn’t be surprised if this is undisclosed PR from Anthropic
_wire_•1h ago
"This model is so alive I want to donate a kidney to it!"
bgwalter•1h ago
It's a Misanthropic propaganda forum. They even have Claude agree in the summary of the bot comments:

"The overwhelming consensus in this thread is that OP's fear is justified and Opus represents a terrifying leap in capability. The discussion isn't about if disruption is coming, but how severe it will be and who will survive."

My fellow Romans, I come here not to discuss disruption, but to survive!

kami8845•1h ago
Same here. I've been using it this week, and on Thursday I began to understand why Lee Sedol retired not long after being defeated by AlphaGo. For the stuff I'm good at, 3 months ago I was better than the models. Today, I'm not sure.
simonw•1h ago
> Sure, I can watch Opus do my work all day long and make sure to intervene if it fucks up here and there, but how long will it be until even that is not needed anymore?

Right: if you expect your job as a software developer to be effectively the same shape in a year or two, you're in for a bad time.

But humans can adapt! Your goal should be to evolve with the tools that are available. In a couple of years' time you should be able to produce significantly more, better code, solving more ambitious problems and making you more valuable as a software professional.

That's how careers have always progressed: I'm a better, faster developer today than I was two years ago.

I'll worry for my career when I meet a company that has a software roadmap that they can feasibly complete.

th0ma5•1h ago
I just wanted to say I think it is doing people a great disservice to advocate for these specific kinds of tools, but the last paragraph is a universally correct statement, seemingly permanently.
Aayush28260•1h ago
Honestly, I have a lot of friends who are studying SWE and they are saying the same thing. Do you guys think that if they do get replaced, they'll still be needed to maintain the AIs?
yellow_lead•1h ago
I tried it and I'm not impressed.

In threads where I see an example of what the author is impressed by, I'm usually not impressed. So when I see something like this, where the author doesn't give any examples, I also assume Claude did something unimpressive.

paulddraper•1h ago
Opus 4.5 is like a couple points higher than Sonnet 4.5 on the SWE benchmark.
Aperocky•1h ago
It almost feels like vindication that where I work, an SDE needs to do everything: infra, development, deployment, launch, operations. There's no dedicated QA, test, or operations at the product level, and while AI has helped a great deal, it's pretty clear it cannot replace me, at least within the next 2 to 3 iterations.

If I was only writing code, the fear would be completely justified.

prymitive•1h ago
There are still a few things missing from all models: taste, shame, and ambition. Yes, they can write code, but they have no idea what needs that code solves, what a good UX looks like, and what not to ship. Not to mention that they all eventually go down rabbit holes of imaginary problems that cannot be solved (because they're not real), and that is where they will spend eternity unless a human says stop it right now.
heavyset_go•22m ago
They have a severe lack of wisdom, as well.
heckintime•1h ago
I used Claude Code to write a relatively complicated watchOS app. I know how to program (FAANG L5), but didn't really know Swift. I achieved a pretty good result for about $600, while a contractor would've cost much more.
agumonkey•1h ago
so how long until our salaries match those of an LLM?
heckintime•56m ago
Good question. I left my job to start something on my own, so having AI help is really nice. I should note that the AI does make many boneheaded mistakes, and I have to solve some of the harder problems on my own.
fragmede•29m ago
Isn't Claude Max only $200? How come you paid $600 for that?
abrichr•7m ago
You can reach much higher spend through the API (which you can configure the `claude` CLI to use).
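
For illustration, here is a minimal sketch of that setup. It assumes Claude Code picks up an ANTHROPIC_API_KEY environment variable for pay-as-you-go API billing and accepts a non-interactive -p prompt flag; the variable name, the flag, and the example prompt are assumptions for this sketch, not details taken from the thread.

    # Minimal sketch: run the claude CLI against API billing instead of a flat
    # Claude Max subscription. Assumes ANTHROPIC_API_KEY is honored and that
    # `claude -p` runs a single non-interactive prompt (both assumed here).
    import os
    import subprocess

    env = dict(os.environ)
    env["ANTHROPIC_API_KEY"] = "sk-ant-..."  # placeholder key, billed per token

    # Each prompt is metered through the API, which is how monthly spend can
    # climb past a fixed subscription price (e.g. the $600 mentioned above).
    subprocess.run(["claude", "-p", "Summarize the open TODOs in this repo"], env=env)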
hecanjog•1h ago
I used Claude Code for a while in the summer, took a vacation from LLMs, and I'm trying it out again now. I've heard the same thing about Opus 4.5, but my experience with Claude Code so far is the same as it was this summer... I guess if you're a casual user, don't get too excited?
quantumHazer•1h ago
Why are we commenting on the Claude subreddit?

1) it’s not impartial

2) it’s useless hype commentary

3) it’s literally astroturfing at this point

AndyKelley•1h ago
flagged for astroturfing
channel_t•1h ago
Almost every single post on the ClaudeAI subreddit is like this. I use Opus 4.5 in my day to day work life and it has quickly become my main axe for agentic stuff but its output is not a world-shattering divergence from Anthropic's previous, also great iterations. The religious zealotry I see with these things is something else.
epolanski•25m ago
I suspect that recurring visitors of that subreddit may not be the greatest IT professionals, but a mixture of juniors (even those with 20 years of experience but still junior) and vibe coders.

Otherwise, with all due respect, there's very little of value to learn in that subreddit.

outside1234•1h ago
OpenAI is burning through $60B a year in losses.

Something doesn't square about this picture: either this is the best thing since sliced bread and it should be wildly profitable, or ... it's not, and it's losing a lot of money because they know there isn't a market at a breakeven price.

simonw•57m ago
They're losing money because they are in a training arms race. If other companies weren't training competitive models OpenAI would be making a ton of money by now.

They have several billion dollars of annual revenue already.

outside1234•34m ago
Google is always going to be training a new model, and it is doing so while profitable.

If OpenAI is only going to be profitable (aka has an actual business model) if other companies aren't training a competitive model, then they are toast. Which is my point. They are toast.

throw310822•29m ago
I think it's also a cultural thing... I mean, it takes time for companies and professionals to get used to the idea that it makes sense to pay hundreds of dollars per month to use an AI; that the expense (which for some is relatively affordable and for others can be a serious one) actually converts into much higher productivity or quality.
uniclaude•1h ago
That's only tangentially related, but I have a very hard time using Opus for anything serious. Sonnet is still much more useful to me thanks to the context window size. By the time Opus actually understands what's needed, I'm n compactions deep and pretty much hoping for the best.

That's a reason why I can't believe the benchmarks, and why I also believe open-source models (claiming 200k but realistically struggling past 40k) aren't just a bit behind SOTA in actual software dev but very far behind.

This is not true for all software, but there are types of systems or environments where it's abundantly clear that Opus (or anything with a sub-1M window) won't cut it, unless it has a very efficient agentic system to help.

I'm not talking about dumping an entire code base into the context; I'm talking about clear specs, some code, library guidelines, and a few elements that allow the LLM to be better than a glorified autocomplete that lives in an Electron fork.

Sonnet still wins easily.

int32_64•1h ago
I wonder if these coding models will be sustainable/profitable long term if the local models continue to improve.

qwen3-coder blew me away.

crystal_revenge•46m ago
I've mainly been using Sonnet 4.5, so I decided to give Opus 4.5 a whirl to see if it could solve an annoying task I've been working on that Sonnet 4.5 absolutely fails at. I just started with "Are you familiar with <task> and can you help me?" and so far the response has been a resounding:

> Taking longer than usual. Trying again shortly (attempt 1 of 10)

> ...

> Taking longer than usual. Trying again shortly (attempt 10 of 10)

> Due to unexpected capacity constraints, Claude is unable to respond to your message. Please try again soon.

I guess I'll have to wait until later to feel the fear...

wdb•28m ago
There are currently issues with the models. Claude Code doesn't work at all for me