
Show HN: Replacing my OS process scheduler with an LLM

https://github.com/mprajyothreddy/brainkernel
1•ImPrajyoth•2m ago•0 comments

Ask HN: What do you use to manage your coding projects?

1•SunshineTheCat•2m ago•0 comments

LeaseGuard: Raft Leases Done Right

https://emptysqua.re/blog/leaseguard-raft-leader-leases-done-right/
1•msaltz•3m ago•0 comments

Show HN: Term Tier – A TUI tier list maker written in Go

https://github.com/StevanFreeborn/term-tier
1•stevanfreeborn•3m ago•1 comment

Living with obesity: The people who are hard-wired to store fat (2021)

https://www.bbc.com/news/uk-57419041
1•paulpauper•6m ago•1 comment

Show HN: Cck ClaudeCode file change tracking and auto Claude.md

1•takawasi•6m ago•0 comments

Specifying the Kernel ABI (2017)

https://lwn.net/Articles/726021/
1•wonger_•6m ago•0 comments

What is the greatest artwork of the century so far?

https://marginalrevolution.com/marginalrevolution/2025/12/what-is-the-greatest-artwork-of-the-cen...
2•paulpauper•6m ago•0 comments

Show HN: MCP Mesh – one endpoint for all your MCP servers (OSS self-hosted)

https://github.com/decocms/mesh
1•gadr90•7m ago•0 comments

Building Code-Chunk: AST Aware Code Chunking

https://supermemory.ai/blog/building-code-chunk-ast-aware-code-chunking/
1•ashvardanian•7m ago•0 comments

2025 in Review: Jagged Intelligence Becomes a Fault Line

https://www.dbreunig.com/2025/12/29/2025-in-review.html
2•dbreunig•8m ago•0 comments

1Password extension breaks code blocks on all websites

https://twitter.com/saltyaom/status/2005701290870087817
2•nailer•9m ago•1 comment

My shower head is racist [doechii]

https://doechii.substack.com/p/my-shower-head-is-racist
2•randycupertino•10m ago•1 comment

Why Private-Equity Millionaires Love South Dakota

https://www.wsj.com/finance/investing/south-dakota-trusts-state-taxes-0aa26539
2•smurda•12m ago•0 comments

Daily orange juice could be helping your heart

https://theconversation.com/your-daily-orange-juice-could-be-helping-your-heart-270492
3•PaulHoule•16m ago•0 comments

Image Sequence to GIF Converter [Gifify]

https://gifify.himthe.dev/
2•bobsterlobster•16m ago•1 comment

Turn Objections into Conditions

https://holenventures.substack.com/p/turn-objections-into-conditions
3•hholen•18m ago•1 comment

Teach Yourself Programming in Ten Years (1998)

https://norvig.com/21-days.html?
2•chistev•21m ago•0 comments

An Attempt at Defining Consciousness

https://docs.google.com/document/d/1Tmd_3DXbnC2YovDHuMslTs681lN-goSB0NqAv9N3EK0/edit?usp=drivesdk
1•Trenthug•22m ago•1 comment

Why people are mad at Framework

https://sgued.fr/blog/framework-omarchy/
7•Shock9889•23m ago•2 comments

Show HN: Mindwtr – Local-First GTD App (Tauri, React Native, Rust)

1•dongdongbh•25m ago•0 comments

Show HN: NoCall.chat – I built a service that calls businesses for you

https://nocall.chat/
2•mikeavdeev•25m ago•0 comments

YouTuber Ross Creations probed for animal abuse over 'opossum launcher' video

https://www.dexerto.com/youtube/youtuber-ross-creations-under-investigation-for-animal-abuse-over...
3•randycupertino•29m ago•0 comments

European Russophobia and Europe's Rejection of Peace: A Two-Century Failure

https://www.jeffsachs.org/newspaper-articles/gwakaclgfdl3g9fn9lfa32llgtbphc
2•hackandthink•31m ago•0 comments

Ask HN: Any example of successful vibe-coded product?

2•sirnicolaz•31m ago•1 comment

AI coding fails because architecture isn't persistent – I built a fix

1•danamakes•31m ago•2 comments

Building Frontier Open Intelligence Accessible to All

https://reflection.ai/blog/frontier-open-intelligence/
1•walterbell•33m ago•0 comments

Using the GitButler MCP Server

https://blog.gitbutler.com/using-gb-mcp
1•aspleenic•37m ago•0 comments

Are There Fourth Amendment Rights in Google Search Terms?

https://reason.com/volokh/2025/12/16/are-there-fourth-amendment-rights-in-google-search-terms/
1•delichon•39m ago•0 comments

Show HN: Financial calculators with no tracking, no signup, no email gates

https://www.financialaha.com/financial-calculators/
2•stefanneculai•40m ago•0 comments

The 70% AI productivity myth: why most companies aren't seeing the gains

https://sderosiaux.substack.com/p/the-70-ai-productivity-myth-why-most
36•chtefi•2h ago

Comments

chiengineer•2h ago
Let's give 99% of the company devices with 16GB of RAM or less and force them to use 85% of it for security scans

- corporate

WHY CAN'T OUR DEVICES RUN TECHNOLOGIES ??????

- also corporate

hn-acct•1h ago
Actually, though. We had one device that was over 10 years old, without any MDM etc., and it outperformed a new laptop building the same product because of the corporate antivirus crap.
HappySweeney•1h ago
If you don't exclude your build folders from the scan it will slow everything down tremendously.
fancyfredbot•1h ago
The METR study cited here is very interesting.

"In the METR study, developers predicted AI would make them 24% faster before starting. After finishing 19% slower, they still believed they'd been 20% faster."

I hadn't heard of this study before. It seems it's been mentioned on HN before but didn't get much traction.

Sharlin•1h ago
Plenty of people have been (too) quick to dismiss that study as not generally applicable because it was about highly experienced OSS devs rather than your average corporate programmer drone.
fancyfredbot•1h ago
That's interesting context for sure, but the fact these were experienced developers makes it all the more surprising that they didn't realise the LLM slowed them down.
Sharlin•24m ago
Measuring programming productivity in general is notoriously difficult; subjectively measuring your own programming productivity is even worse. A magic LoC machine saying brrrrrt gives an overoptimistic sense of getting things done.
_aavaa_•1h ago
The issue I have with the paper is that it seems (based on my skimming) that they did not pick developers who were already versed in AI tooling. So they're comparing (experienced dev working in the way they're comfortable with) vs. (experienced dev working with a new tool for the first time, not yet past the onboarding productivity slump).
Sharlin•1h ago
Longitudinal studies are definitely needed, but of course at the time the research for this paper was done there weren't any programmers experienced with AI assist out there yet.
simonw•1h ago
I see it brought up almost every week! It's a firm favorite of the "LLMs don't actually help write code" contingent, probably because there are very few other credible studies they can point to in support of their position.

Most people who cite it clearly didn't read as far as the table where METR themselves say:

> We do not provide evidence that:

> 1) AI systems do not currently speed up many or most software developers. Clarification: We do not claim that our developers or repositories represent a majority or plurality of software development work

> 2) AI systems do not speed up individuals or groups in domains other than software development. Clarification: We only study software development

> 3) AI systems in the near future will not speed up developers in our exact setting. Clarification: Progress is difficult to predict, and there has been substantial AI progress over the past five years [3]

> 4) There are not ways of using existing AI systems more effectively to achieve positive speedup in our exact setting. Clarification: Cursor does not sample many tokens from LLMs, it may not use optimal prompting/scaffolding, and domain/repository-specific training/finetuning/few-shot learning could yield positive speedup

https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...

fancyfredbot•39m ago
Weird, you shouldn't really need to list the things your study doesn't prove! I guess they anticipated that the study might be misrepresented and wanted to get ahead of that.

Their study still shows something interesting, and quite surprising. But if you choose to extrapolate from this specific setting and say coding assistants don't work in general, then that's not scientific, and you need to be careful.

I think the study should probably decrease your prior that AI assistants actually speed up development, even if developers using AI tell you otherwise. The fact that it feels faster when it is slower is super interesting.

simonw•22m ago
The lesson I took from the study is that developers are terrible at estimating their own productivity based on a new tool.

Being armed with that knowledge is useful when thinking about my own productivity, as I know that there's a risk of me over-estimating the impact of this stuff.

But then I look at https://github.com/simonw, which currently lists 530 commits across 46 repositories for the month of December, the month I started using Opus 4.5 in Claude Code. That looks pretty credible to me!

pydry•10m ago
The lesson I learned is that agentic coding uses intermittent reinforcement to mimic a slot machine.

It (along with the hundreds of billions in investment hinging on it) explains the legions of people online who passionately defend their "system". Every gambler has a "system", and they usually earnestly believe it is helping them.

Some people even write popular and profitable blogs about playing slot machines where they share their tips and tricks.

fancyfredbot•9m ago
That's certainly an impressive month! However, it's conceivable that you are an outlier (in the best possible way!)

I liked the way they did that study and I would be interested to see an updated version with new tools.

I'm not particularly sceptical myself and my guess is that using Opus 4.5 would probably have produced a different result to the one in the original study.

ezoe•1h ago
If anyone ever wonders why they don't see productivity improvements, they really need to read The Mythical Man-Month.

A Garage Duo can out-compete a corporation because there is less overhead. But a Garage Duo can't possibly match the sheer volume of work a corporation puts out.

fancyfredbot•1h ago
In my view, the reasons why LLMs may be less effective in a corporate environment are quite different from the human factors in The Mythical Man-Month.

I think that the reason LLMs don't work as well in a corporate environment with large codebases and complex business logic, but do work well in greenfield projects, is linked to the amount of context the agents can maintain.

Many types of corporate overhead can be reduced using an LLM, especially the "well meant but inefficient" processes around JIRA tickets, testing evidence, code review, documentation, etc.
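
On the context point, a rough back-of-the-envelope sketch (it assumes the common ~4 characters-per-token heuristic and an illustrative 200k-token window; both vary by model and tokenizer):

    # Rough estimate of whether a codebase fits in a model's context window.
    # Assumes ~4 characters per token (a common heuristic) and an
    # illustrative 200k-token window; both vary by model and tokenizer.
    import os

    CHARS_PER_TOKEN = 4
    CONTEXT_WINDOW = 200_000  # tokens

    total_chars = 0
    for root, _dirs, files in os.walk("."):
        for name in files:
            if name.endswith((".py", ".js", ".ts", ".go", ".java")):
                with open(os.path.join(root, name), errors="ignore") as f:
                    total_chars += len(f.read())

    tokens = total_chars // CHARS_PER_TOKEN
    print(f"~{tokens:,} tokens; fits in one window: {tokens <= CONTEXT_WINDOW}")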

pigpop•1h ago
I've found that something very similar to those "inefficient" processes works incredibly well when applied to LLMs. All of those processes are designed to allow for seamless handoff to different people who may not be familiar with the project or code which is exactly what an LLM behaves like when you clear its context.
nradov•1h ago
The limited LLM context windows could be an argument in favor of a microservices architecture with each service or library in its own repository.
kamaal•53m ago
>>there is less overhead.

Methods to reduce overhead have been available throughout the history of our industry. Unfortunately, almost all of the time they involve using productive tools that would in some way reduce the head count required to do large projects.

The way this works is that you eventually have to work with languages like Lisp, Perl, or Prolog, and then someone comes up with a theory that programming must be optimised mostly for beginners and that power tooling must be avoided. Now you are forced to use verbose languages, where writing, maintaining, and troubleshooting take a lot of people.

The thing is, this time around we have a way to make code by asking an AI tool questions. So you get the same effect, but now with languages like JS and Python.

jennyholzer3•24m ago
the productivity improvement is the Big Lie
mgrat•1h ago
I've worked at a number of non-tech companies the past few years. They bought every SaaS product, Palantir, Databricks, multi-cloud, their dev teams adopted every pattern popularized by big tech and the results were always mixed. Any gains were wiped out by being buried under technical debt. They had all the data catalogs & 'ontologies' with none of the governance to go make it work. Turns out that benefiting from all this tech requires you to re-organize and change your culture. For a lot of companies, they're just not going to see big gains from AI or tech in general at this point.
mashlol•1h ago
AI almost always reduces the time from "I need to implement this feature" to "there is some code that implements this feature".

However in my experience, the issue with AI is the potential hidden cost down the road. We either have to:

1. Code review the AI-generated code line by line, as it's generated, to ensure it's exactly what you'd have produced yourself, or

2. Pay an unknown amount of tech debt down the road when it inevitably wasn't what you'd have done yourself and isn't extensible, scalable, well-written code.

brightball•1h ago
Exactly. Optimizations in one area will simply move the bottleneck, so in order to truly realize gains you have to optimize the entire software pipeline.
nradov•1h ago
Exactly right. It turns out that writing code is hardly ever the real bottleneck. People should spend some time learning the basics of queueing theory.

http://lpd2.com/
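
For the unfamiliar: Little's Law says L = λW (items in the system = arrival rate × average time in the system). A toy illustration, with made-up numbers:

    # Little's Law: L = lambda * W, so W = L / lambda.
    # Given work-in-progress (L) and throughput (lambda), cycle time (W) falls out.
    def cycle_time_days(wip: float, throughput_per_day: float) -> float:
        """Average days an item spends in the system, by Little's Law."""
        return wip / throughput_per_day

    # Made-up numbers: 20 PRs in flight, 4 merged per day -> 5 days each.
    # Generating code faster raises WIP; unless review throughput rises too,
    # cycle time gets worse, not better.
    print(cycle_time_days(wip=20, throughput_per_day=4))  # 5.0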

jimbo808•1h ago
RE 2: It's not that far down the road either. Lazily reviewed or unreviewed LLM code rapidly turns your codebase into an absolute mess that LLMs can't maintain either. Very quickly you find yourself with lots of redundant code and duplicated logic, random unused code that's called by other unused code that gets called inside a branch that only tests will trigger, stuff like that. Eventually LLMs start fixing the code that isn't used and then confidently report that they solved the problem, filling up the context window with redundant nonsense on every prompt, so they can't get anywhere. Yolo AI coding is like the payday loan of tech debt.
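
One crude way to surface that kind of rot, as a sketch only (real dead-code tools do far more; this catches nothing beyond same-file, name-level references):

    # Crude dead-code sniff: list functions defined in a file but never
    # referenced by name elsewhere in that same file. Misses cross-module
    # uses and methods called via attributes; illustration only.
    import ast
    import sys

    source = open(sys.argv[1]).read()
    tree = ast.parse(source)

    defined = {n.name for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)}
    used = {n.id for n in ast.walk(tree) if isinstance(n, ast.Name)}

    for name in sorted(defined - used):
        print(f"possibly unused: {name}()")
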
DougN7•1h ago
This can happen sooner than you think too. I asked for what I thought was a simple feature and the AI wrote and rewrote a number of times trying to get it right, and eventually (not making this up) it told me the file was corrupt and could I please restore it from backup. This happened within about 20-30 minutes of asking for the change.
jennyholzer3•26m ago
This is why I say LLMs are for idiots
linsomniac•1h ago
>Code review the AI generated code line by line

Have you considered having an AI code review the AI code before handing it off to a human? I've been experimenting with having claude work on some code and commit it, then having codex review the changes in the most recent git commit, then eyeballing the recommendations and either having codex make the changes or giving them back to claude. That has seemed quite effective so far.

Maybe it's turtles all the way down?
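
A minimal sketch of that loop (the git call is standard; "reviewer-cli" is a hypothetical placeholder for whatever non-interactive invocation your second tool provides):

    # Second-model review of the first model's latest commit.
    # "reviewer-cli" is a placeholder, not a real tool; substitute your own.
    import subprocess

    def last_commit() -> str:
        """Return the most recent commit's message and diff."""
        return subprocess.run(
            ["git", "show", "HEAD"],
            capture_output=True, text=True, check=True,
        ).stdout

    def request_review(diff: str) -> str:
        """Hand the diff to a second model for review (hypothetical CLI)."""
        return subprocess.run(
            ["reviewer-cli", "Review this diff for bugs and regressions:"],
            input=diff, capture_output=True, text=True, check=True,
        ).stdout

    if __name__ == "__main__":
        print(request_review(last_commit()))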

nospice•1h ago
Another day, another evidently AI-written article about AI on the front page of HN...
hackeman300•1h ago
Yup, closed as soon as I saw the classic "it's not x, it's y" pattern.
bulletsvshumans•1h ago
I think coding agents require fundamentally different development practices in order to produce efficiency improvements. And just like any new tool, they benefit from wisdom in how they are applied, which we are just starting to develop as an industry. I expect that over time we will grow to understand and also expand the circumstances in which they are a net benefit, while also appreciating where they are a hindrance, leading to an overall efficiency increase as we avoid the productivity hit resulting from their misapplication.
orwin•1h ago
If we take out most frontend work, and the easy backend/ops tasks where writing the code/config is 99% of the work, I think my overall productivity with the latest gen (basically Opus 4.5) improved by 15-20%. I am also _very_ sure that with the previous generation (Sonnet 4, Sonnet 4.5, Codex 5.1), my team's overall velocity decreased, even taking into account the frontend and the "easy" tasks. The number of production bugs we had to deal with this year is crazy. Too much code is generated, and the other senior on my team and I just can't carefully review everything; we have to trust sometimes (especially on data structures).

The worst part is reading a PR and catching a reintroduced bug that was fixed a few commits ago. That was the first time I almost lost my cool at work and said a negative thing to a coworker.

This would be my advice to juniors (and I mean, basically, devs who don't yet understand the underlying business/architecture): use the AI to explain how stuff works, maybe generate basic functions, but write the code logic/algorithms yourself until you are sure you understand what you're doing and why. Work and reflect on the data structures yourself, even if they were generated by the AI, and ask for alternatives. Always ask for alternatives; it helps understanding. You might not see huge productivity gains from AI, but you will improve first, and then productivity will improve very fast: from your brain first, then from AI.

mapontosevenths•1h ago
Just to add to your advice to juniors working with AI:

* Force the AI to write tests for everything. Ensure those tests function. Writing boring unit tests used to be arduous; now the machine can do it for you. There's no excuse for a code regression making its way into a PR, because you actually ran the tests before you committed, right? Right? RIGHT?

* Force the AI to write documentation and properly comment code, then (this is the tricky part) you actually read what it said it was doing and ensure that this is what you wanted it to do before you commit.

Just doing these two things will vastly improve the quality and prevent most of the dumb regressions that are common with AI-generated code. Even if you're too busy/lazy to read every line of code the AI outputs, just ensuring that it passes the tests and that the comments/docs describe the behavior you asked for will get you 90% of the way there.
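
One cheap way to make the "you actually ran the tests" part automatic is a hook. A sketch, assuming a pytest-based suite, saved as an executable .git/hooks/pre-commit:

    #!/usr/bin/env python3
    # Pre-commit hook sketch: block the commit when the test suite fails.
    # Assumes pytest is installed; chmod +x this file so git will run it.
    import subprocess
    import sys

    result = subprocess.run(["pytest", "--quiet"])
    if result.returncode != 0:
        print("Tests failed; commit aborted.")
    sys.exit(result.returncode)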

syspec•46m ago
Sometimes the AI is all too good at writing tests.

I agree with the idea, and I do it too, but you need to make sure the tests don't just validate the incorrect behavior, and that the code isn't updated to pass the test in a way that actually "misses the point".

I've had this happen to me on one or two tests every time

mapontosevenths•40m ago
I agree 100%.

For some reason Gemini seems to be worse at it than Claude lately. Since mostly moving to 3, I've had it go back and change the tests rather than fixing the bug, on what seems to be a regular basis. It's like it's gotten smart enough to "cheat" more. You really do still have to check that the tests are valid.

aisisiiaai•37m ago
Even more important, those tests need to be useful. Often unit tests simply test that the code works as written, which generally does more harm than good.

To give some further advice to juniors: if somebody tells you writing unit tests is boring, they haven't learned how to write good tests. There appears to be a large intersection between devs who think testing is a dull task and devs who see a self-proclaimed speed-up from AI. I don't think this is a coincidence.

Writing useful tests is just as important as writing app code, and should be reviewed with equal scrutiny.

AnimalMuppet•29m ago
And, you actually wrote the regression test when you fixed the bug, right? Right?
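
For juniors wondering what that looks like, a tiny pytest-style sketch (the function and the bug are both hypothetical):

    # A regression test pinned to a (hypothetical) fixed bug: it encodes the
    # exact input that used to fail, so the bug cannot silently return.
    def parse_price(text: str) -> float:
        """Toy function under test; imagine it once crashed on "1,299.99"."""
        return float(text.replace(",", ""))

    def test_parse_price_handles_thousands_separator():
        # Would have raised ValueError before the (hypothetical) fix.
        assert parse_price("1,299.99") == 1299.99
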
linsomniac•1h ago
>The AI fluency tax. This isn't free to learn.

In programming we've often embraced spending time to learn new tools. The AI tools are just another set of tools, and they're rapidly changing as well.

I've been experimenting seriously with the tools for ~3 years now, and I'm still learning a lot about their use. Just this past weekend I started using a whole new workflow, and it one-shotted a PWA implementing a fully featured calorie-tracking app (social features, pre-populating foods from online databases, weight tracking and graphing, avatars); it's on par with many I've used in the past that cost $30+/year.

Someone just starting out at chat.openai.com isn't going to get close to this. You absolutely have to spend time learning the tooling for it to be at all effective.

turlockmike•1h ago
When producing code is cheap, you can spend more time on verification testing.

Force the LLM to follow a workflow, have it do TDD, use task lists, have it write implementation plans.

LLMs are great coders but subpar developers; help them be a good developer and you will see massive returns.
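
A sketch of what "force the LLM to follow a workflow" might look like when scripted (assumes a pytest suite; run_agent is a placeholder for your tool's actual invocation):

    # Workflow harness sketch: one task at a time, test suite between steps.
    import subprocess

    TASKS = [
        "Write a failing test for the new discount rule.",
        "Implement the smallest change that makes the test pass.",
        "Refactor, then update the implementation plan.",
    ]

    def run_agent(instruction: str) -> None:
        """Placeholder: send one instruction to the coding agent (tool-specific)."""
        print(f"[agent] {instruction}")

    def tests_pass() -> bool:
        """The commit-worthiness signal after each step (assumes pytest)."""
        return subprocess.run(["pytest", "--quiet"]).returncode == 0

    for task in TASKS:
        run_agent(task)
        print(f"tests green: {tests_pass()}")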

pydry•17m ago
Coz I have always done coding this way with humans, I started out using LLMs to do simple bits of refactoring where tests could be used to validate that the output still worked.

I did not get the impression from this that LLMs were great coders. They would frequently miss stuff, make mistakes, and often just ignore the instructions I gave them.

Sometimes they would get it right, but not often enough. The agentic coding loop still slowed me down overall. Perhaps if I were more junior it would have been a net boost.

ukuina•1h ago
This article simply reinforces existing (and outdated) biases.

Complex legacy refactoring + systems with poor documentation or unusual patterns + architectural decisions requiring deep context: these go hand in hand. LLMs are really good at pulling these older systems apart, documenting them, then refactoring them, tests and all. The problem is exacerbated by poor documentation of domain expectations. Get your experts in a room weekly and record their rambling ideas and the history of the system. Synthesize with an LLM against the existing codebase. You'll get to 80% system comprehension in a matter of months.

Novel problem-solving with high stakes: This is the true bottleneck, and where engineers can shine. Risk assessment and recombination of ideas, with rapid prototyping.

everdrive•1h ago
A lot of the time, AI allows you to exercise basic competence at tasks for which you'd otherwise be incompetent. I think this is why it feels so powerful. You can jump into more or less any task below a certain level of complexity. (eg: you're not going to write an operating system with an LLM but you can set up and configure Wordpress if you'd never done it before.)

I think for users this _feels_ incredibly powerful; however, it also has its own pitfalls: any topic you're incompetent at is one you're also unequipped to review successfully.

I think there are some other productivity pitfalls for LLMs:

- Employees use it to give their boss emails / summaries / etc in the language and style their boss wants. This makes their boss happy, but doesn't actually modify productivity whatsoever since the exercise was a waste of time in the first place.

- Employees send more emails, and summarize more emails. They look busier, but they're not actually writing the emails or really reading them. The email volume has increased, however the emails themselves were probably a waste of time in the first place.

- There is more work to review all around and much of it is of poor quality.

I think these issues play a smaller part than some of the general issues raised (eg: poor quality code / lack of code reviews / etc.) but are still worth noting.

AnimalMuppet•30m ago
It's like Excel: it's really powerful to enable someone who actually knows what needs to be done to build a little tool that does that thing. It often doesn't have to be professional quality, let alone perfect. It just has to be better than doing the same thing manually. There are massive productivity gains to be had there... for people with that kind of problem.

This is completely orthogonal to productivity gains for full time professional developers.

jennyholzer3•24m ago
"There is more work to review all around and much of it is of poor quality."

This is the average software developer's experience of LLMs

mattas•1h ago
In my experience, it's basically impossible to accurately measure the productivity of knowledge work. Whenever I see a stat associated with a productivity gain/loss, I get skeptical.

If you go the pure subjective route, I’ve found that people conflate “speed” or “productivity” with “ease.”

josefritzishere•47m ago
I think AI would have better general acceptance if we stopped mythologizing its utility. It's so wildly exaggerated that it can't ever live up to the hype. If AI can't adapt to a reality-based universe, the bubble is going to burst all the sooner.
aisisiiaai•34m ago
A key point missing from a lot of the AI debate is how much work is useless, from something as simple as a feature that's never turned on to the more extreme case of a job that doesn't need to exist.

We have a lot of useless work being done, and AI is absolutely going to be a 10x speed-up for this kind of work.

zihotki•31m ago
Sounds like an AI-slop article. A whole section about "Why most enterprises don't" with many words but no actual data or analysis, just assumptions based on an orthogonal report.

AI won't give you much productivity if the problem you're challenged with is a human problem. That can happen to both startups and enterprises.