
The Codex App

https://openai.com/index/introducing-the-codex-app/
70•meetpateltech•40m ago•39 comments

Ask HN: Who is hiring? (February 2026)

143•whoishiring•2h ago•165 comments

Linux From Scratch Ends SysVinit Support

https://lists.linuxfromscratch.org/sympa/arc/lfs-announce/2026-02/msg00000.html
41•cf100clunk•58m ago•21 comments

Advancing AI Benchmarking with Game Arena

https://blog.google/innovation-and-ai/models-and-research/google-deepmind/kaggle-game-arena-updates/
25•salkahfi•54m ago•6 comments

Nano-vLLM: How a vLLM-style inference engine works

https://neutree.ai/blog/nano-vllm-part-1
162•yz-yu•5h ago•21 comments

4x faster network file sync with rclone (vs rsync) (2025)

https://www.jeffgeerling.com/blog/2025/4x-faster-network-file-sync-rclone-vs-rsync/
157•indigodaddy•3d ago•68 comments

Geologists may have solved mystery of Green River's 'uphill' route

https://phys.org/news/2026-01-geologists-mystery-green-river-uphill.html
94•defrost•5h ago•20 comments

They lied to you. Building software is hard

https://blog.nordcraft.com/they-lied-to-you-building-software-is-really-hard
60•xiaohanyu•3d ago•30 comments

EPA Advances Farmers' Right to Repair

https://www.epa.gov/newsreleases/epa-advances-farmers-right-repair-their-own-equipment-saving-rep...
9•bilsbie•35m ago•0 comments

Hacking Moltbook: The AI Social Network Any Human Can Control

https://www.wiz.io/blog/exposed-moltbook-database-reveals-millions-of-api-keys
38•galnagli•2h ago•17 comments

Being sane in insane places (1973) [pdf]

https://www.weber.edu/wsuimages/psychology/FacultySites/Horvat/OnBeingSaneInInsanePlaces.PDF
23•dbgrman•1h ago•6 comments

Todd C. Miller – sudo Maintainer for over 30 years

https://www.millert.dev/
69•wodniok•1h ago•43 comments

Ask HN: Who wants to be hired? (February 2026)

38•whoishiring•2h ago•88 comments

My fast zero-allocation webserver using OxCaml

https://anil.recoil.org/notes/oxcaml-httpz
109•noelwelsh•7h ago•36 comments

Defeating a 40-year-old copy protection dongle

https://dmitrybrant.com/2026/02/01/defeating-a-40-year-old-copy-protection-dongle
782•zdw•21h ago•243 comments

IsoCoaster – Theme Park Builder

https://iso-coaster.com/
47•duck•3d ago•6 comments

Valanza – my Unix way for weight tracking and analysis

https://github.com/paolomarrone/valanza
17•lallero317•4d ago•4 comments

Claude Code is suddenly everywhere inside Microsoft

https://www.theverge.com/tech/865689/microsoft-claude-code-anthropic-partnership-notepad
239•Anon84•6h ago•336 comments

Solving the Santa Claus concurrency puzzle with a model checker

https://wyounas.github.io/puzzles/concurrency/2026/01/10/how-to-help-santa-claus-concurrently/
11•simplegeek•3d ago•0 comments

My iPhone 16 Pro Max produces garbage output when running MLX LLMs

https://journal.rafaelcosta.me/my-thousand-dollar-iphone-cant-do-math/
397•rafaelcosta•21h ago•183 comments

Kernighan on Programming

83•chrisjj•2h ago•18 comments

Termux

https://github.com/termux/termux-app
294•tosh•7h ago•143 comments

Hypergrowth isn’t always easy

https://tailscale.com/blog/hypergrowth-isnt-always-easy
98•usrme•2d ago•41 comments

Apple's MacBook Pro DFU port documentation is wrong

https://lapcatsoftware.com/articles/2026/2/1.html
181•zdw•15h ago•68 comments

Library of Juggling

https://libraryofjuggling.com/
86•tontony•10h ago•23 comments

Show HN: Wikipedia as a doomscrollable social media feed

https://xikipedia.org
374•rebane2001•18h ago•126 comments

Show HN: Stelvio – Ship Python to AWS

https://stelvio.dev/
24•michal-stlv•3h ago•14 comments

Show HN: NanoClaw – “Clawdbot” in 500 lines of TS with Apple container isolation

https://github.com/gavrielc/nanoclaw
484•jimminyx•19h ago•190 comments

Ratchets in software development (2021)

https://qntm.org/ratchet
105•nvader•4d ago•36 comments

Ian's Shoelace Site

https://www.fieggen.com/shoelace/
351•righthand•1d ago•67 comments

My five stages of AI grief

https://dev-tester.com/my-five-stages-of-ai-grief/
18•mijustin•2h ago

Comments

aurareturn•1h ago
Many HN commenters went through the same thing over the last three years. You'd find plenty of skeptics in 2023 and 2024 comments. The first half of 2025 was the anger stage. The latter half of 2025 was the full-on bargaining stage, when models like GPT5.2 and Opus 4.5 were released. In 2026, people are in the depression stage.

I don't think most devs will reach the acceptance stage until later this year, when Blackwell-class models come online and AI undeniably writes better code than the vast majority of humans. I'm pretty sure GPT5.2 and Opus 4.5 were only trained on H200-class chips.

Edit: Based on comments here, it seems like HN is still mostly at the anger stage.

rootnod3•1h ago
I must be using it wrong, then, or using the wrong languages. All I have seen it produce so far was mediocre at best and painfully wrong two or three prompts in.

Not even getting started on how it just "fixes" a bug by introducing a wholly new one, then re-introduces the old one when you point that out.

aurareturn•1h ago
You most likely are.

Or maybe the LLM just hasn't been trained enough on the language you're using.

visarga•1h ago
I went through it twice: once for classical ML engineering work (I used to build bespoke models, not just prompt), and a second time for coding.
anonymous908213•1h ago
Edit: The comment I am replying to was rewritten completely, and originally asserted that the quality of LLMs was now undeniable.

"Undeniably"? I will deny that they are good. I try to use LLMs on a near-daily basis and find them unbearably frustrating to use. They cannot even reliably complete instructions like "following the pattern of A, B, and C in the existing code, create X, Y, and Z functions, but with one change". This is a given; the work I do is outside the training dataset in any meaningful sense, so their next-token prediction statistically leans away from predicting whatever I'm doing, even if RL training to "follow instructions" is marginally effective.

The conclusion I've come to is that the 10x hypebots fall into two categories. The first is hobbyists who could barely code at all, and now they are 10x productive at producing very bad software that is not worth sharing with the world. The other category is people who use LLMs to launder code from the training dataset to wash it free of its licenses. If your use case is reproducing code it has already been trained on, it can do that quickly.

These claims of "holding it wrong", one of which I already see in the replies, are fundamentally preposterous. This is the revolution that is democratising software engineering for anyone who can write natural language, yet competent software engineers are using it wrong? No, the reality is that it simply doesn't have that level of utility. If it did, we would be seeing an influx of excellent software worthy of widespread usage that would replace much of the existing flawed software in the world, if not pushing new boundaries altogether. Instead we get flooded with ShowHNs fit for the pig trough.

That's not to say LLMs have zero utility. They can obviously generate a proof-of-concept quickly, and if the task is trivial enough, save a couple of minutes writing a throwaway script that you actually use day-to-day. I find them to be somewhat useful for retrieving information from documentation, although some of this gain is offset by the time wasted from hallucinated APIs. But I would estimate the productivity gains at 5%, maybe. That gain is hardly worth the accelerating AI psychosis gripping society and flooding the internet with garbage that drowns out the worthwhile content.

Addendum: Now that your post has been rewritten to assert that no, LLMs aren't there yet, but surely in the next 6 months, this time for sure it'll be AGI... welcome to the bubble. I've been told that AGI is coming in a couple of months every month for the past two years. We are no closer to it than we were two years ago. The improvements have been modest and there are clearly diminishing returns on investing in exponential scaling, not to mention that more scaling can never solve the fundamental architectural flaws of LLMs.

aurareturn•1h ago
What programming language and what LLM model did you use?
anonymous908213•1h ago
I write code in C, C#, Typescript, and Python for various use cases, as well as my own language in development. I have used every frontier model, including Opus 4.5 that people won't stop proclaiming is a paradigm shift. They have continuously disappointed me at every turn.
aurareturn•1h ago
Got a concrete example of where Opus 4.5 disappointed you?

Maybe a Github repo for me to try?

anonymous908213•1h ago
As you can see from the username, this is an anonymous account where I speak freely without concern of it being associated with me or my projects in perpetuity. I will extend the same question to you, though, as I have offered to every person I engage with on this subject on HN: what is your 10x project? Have you produced any software that other people would consider using[1]? I have yet to be shown a single project that is primarily LLM-developed which would indicate to me that LLMs are changing the future of software engineering.

[1] AI psychosis projects like Gas Town, which are only used by other psychosis victims to create more psychosis projects and which altogether in the end never result in a real project that solves a real-world problem for real people do not count.

NitpickLawyer•1h ago
The problems with your take (and others like it) are manifold.

First, there are some "smells" that I noticed. You say that LLMs hallucinate APIs, and in another comment (a brief skim of your history to make sure it's worth replying) you say something about chatting with an LLM. If you're "using" them in a chat interface, that's already 1+ year-old tech, and you should know that no one here talks about that. We're talking about LLM-assisted coding using harnesses that make it possible and worth your time. Another smell is that you assert that LLMs only work for languages that are popular. While it's true they work best in those cases, as of ~1 year ago it's also true that they can work even on invented languages. So I take every "I work in this very niche field" with a grain of salt nowadays.

Second, the overall problem with "it doesn't work for me" is that it's a useless signal, both in general and in particular. If I see a "positive post", I can immediately test it. If it works, great, I can include it in my toolbox. If it doesn't work, I can skip it. But with posts like yours, I can't do anything. You haven't provided any details, and even if you did, it would still be so dependent on your particular problem, with language, env, etc., that it would make the signal very weak for anyone else who doesn't have your particular problem.

I am actually curious, if you can share, what your setup is. And perhaps an example of things you couldn't do. Perhaps we can help.

The third problem that I see is that you are "fighting" other demons instead of working with people who want to contribute. You bring up hypebots, you bring up AGI, unkept promises, and so on. But we, the people here, haven't promised you anything. We're not the ones hyping up AGI, ASI, and so on. If you want to learn something, it would be more productive to keep those discussions separate. If your fight is with the hypebots, fight them on those topics, not here. Or, honestly, don't waste your time. But you do you.

Having said that, here's my take: with small provisions made for extremely niche fields (so extreme that it would place you in the 0.0x% of coders, making the overall point moot anyway), I think people reporting zero success are either wrong or using it wrong. It's impossible for me to believe that everything I can achieve is so out of phase with whatever you are trying to achieve that you get literally zero success. And I'm sick and tired of hearing this "oh, it works for trivial tasks". No. It works reliably and unattended mostly for trivial tasks, but it can also work in very advanced niches. And there are plenty of public examples of this already: things like kernel optimisation, tensor libraries, CUDA code, and so on. These are not "amateur" topics by any stretch of the word. And no, juniors can't one-shot this either. I say this after 25+ years of doing this: there are plenty of times where I'm dumbstruck by something working on the first try. And I can't believe I'm the only one.

anonymous908213•34m ago
I use the chat interface by default because it is the only way I have felt that I am gaining any productivity at all. Letting LLMs waste time probing for files and executing their atrocities on my codebase has only resulted in lost time. Not for lack of trying; I have set up Codex and Claude Code environments multiple times. I have wasted entire days trying to configure the setup and get something that provides value to me, three times last year: once with an early release of CC, once with Codex's release, and once again to retry them with GPT 5.2 and Opus 4.5. Every attempt ended in a complete failure to justify the time invested.

> The third problem that I see is that you are "fighting" other deamons, instead of working with people that want to contribute. You bring up hypebots, you bring up AGI, unkept promises and so on. But we, the people here, haven't promised you anything. We're not the ones hyping up agi asi mgi and so on. If you want to learn something, it would be more productive to keep those discussions separate. If your fight is with the hyperbots, fight them on those topics, not here. Or, honestly, don't waste your time. But you do you.

This very thread is about hype. The post I originally replied to suggests that developers are in stages of grief about LLMs. That we are traversing denial, anger, and depression, before our inevitable acceptance. It is utterly tiring to be subjected to this day in, day out, in every avenue of public discourse about the field. Of course I have grievances with the hype. Of course I don't appreciate being told I'm in denial and that everything has changed. The only thing that has changed is that LLM-generated articles are all over HN and ShowHN is polluted with a very high quantity of very low quality content.

> Second, the overall problem with "it doesn't work for me" is that it's an useless signal.

The signal is not for the true believers. People who have not succumbed to the hype may find value in knowing that they are not alone. If one person can't make use of LLMs while everyone around them is hyping them up, it may make that person feel like they are doing something wrong and being left behind. But if people push back against the hype, they will know that they are not alone, and that maybe it isn't actually worth investing entire workdays into trying to find the magical configuration of .md files that turns Claude Code from 0.5x productivity to 10x productivity.

To be clear, I'm not really in the market for advice on "holding it right". If I find myself being left behind in reality, I will continue giving the tooling another shot until I get it right. I spend most of my life coding, and have so many large projects I wish to bring into the world and not enough time to do them all; I will relentlessly pursue a productivity increase if and when it becomes available. As it is, though, I have seen zero evidence that I am actually being left behind, and am not currently interested in trying again at the present time.

palmotea•57m ago
> I don't think most devs will go into acceptance stage until later this year when Blackwell-class models come online and AI undeniably write better code than vast majority of humans. I'm pretty sure GPT5.2 and Opus 4.5 were only trained on H200-class chips.

We can only hope! It's about time all those pompous developers embrace the economic rug-pull, and adopt a lifestyle more in line with their true economic value. It's capitalism people, the best system there is. Deal with it and quit whining.

Kapura•1h ago
If the only way to advance my career were to talk into a chatbox that makes shit up and encourages people to kill themselves, I would stop using computers and spend my days picking oranges. I guess some people feel differently.
weeznerps•1h ago
Anger stage
visarga•1h ago
People do those two bad things too. We did them first, and we did them more. Slop too: we invented slop and SEO.
ChipopLeMoral•1h ago
I just tweeted the exact same thought a few days ago, I guess we're all going through the same journey right now.

When GPT3 was opened to researchers 4-5 years ago, a friend of mine had access and we tried some stuff together. I was blown away that it could translate code it hadn't seen between programming languages, even though it was pretty bad at it at the time. I did not expect coding to be the killer app of LLMs, but here we are.

catigula•1h ago
>What I came to realize as I began using these tools more is that I was entirely wrong about feeling like my skills would become useless. They don't replace all the experience and knowledge I've accumulated in over two decades as a developer, and instead they enhance what I could do.

FYI this is the denial stage.

aurareturn•1h ago
haha, you might be right
julienchastang•1h ago
> Writing code isn't where I bring the most value. Understanding business problems, analyzing trade-offs, and making sure we're building the right things is where I can put all those years to good use. It might sound like an obvious thing, but it took me a while to get to this point.

Reaching this epiphany is a major milestone in the career of an SE even before the days of LLMs. That's basically the crux of it.

aurareturn•1h ago
Based on my 20 years of experience, the vast majority of developers do not possess those skills.

I'd guess that only 10% of them actually do. To have those skills, you need good user sense, good business sense, good negotiation skills, and good communication skills. These skills align more with the product manager, to be frank.

Of course, the best people are still going to be those who have the technical chops and business sense. They'll be amplified more in this era.

happytoexplain•1h ago
The vast majority of developers are not in roles where decisions at that level are being made (except occasionally, on a smaller scope), so their ability in that context is irrelevant. You're describing project leads and department leads.
XenophileJKO•58m ago
Every engineer has this opportunity, whether they use it or not is usually the issue. Almost every decision that you make can make the current and future success of the business more or less likely.

I've said before: "There are no 'staff' projects, only 'staff' execution."

Yossarrian22•48m ago
Is there any non-pro-AI position that couldn’t be construed as being part of a stage of grief?