
GenAI Processors: Build powerful and flexible Gemini applications

https://developers.googleblog.com/en/genai-processors/
18•tzury•25m ago•0 comments

Grok: Searching X for "From:Elonmusk (Israel or Palestine or Hamas or Gaza)"

https://simonwillison.net/2025/Jul/11/grok-musk/
232•simonw•6h ago•116 comments

Show HN: Pangolin – Open source alternative to Cloudflare Tunnels

https://github.com/fosrl/pangolin
187•miloschwartz•9h ago•24 comments

Postgres LISTEN/NOTIFY does not scale

https://www.recall.ai/blog/postgres-listen-notify-does-not-scale
400•davidgu•3d ago•152 comments

Batch Mode in the Gemini API: Process More for Less

https://developers.googleblog.com/en/scale-your-ai-workloads-batch-mode-gemini-api/
86•xnx•3d ago•23 comments

OpenFront: Realtime Risk-like multiplayer game in the browser

https://openfront.io/
3•thombles•29m ago•0 comments

The ChompSaw: A Benchtop Power Tool That's Safe for Kids to Use

https://www.core77.com/posts/137602/The-ChompSaw-A-Benchtop-Power-Tool-Thats-Safe-for-Kids-to-Use
150•surprisetalk•3d ago•88 comments

What is Realtalk’s relationship to AI? (2024)

https://dynamicland.org/2024/FAQ/#What_is_Realtalks_relationship_to_AI
248•prathyvsh•15h ago•83 comments

LLM Inference Handbook

https://bentoml.com/llm/
14•djhu9•4h ago•0 comments

Show HN: Open source alternative to Perplexity Comet

https://www.browseros.com/
205•felarof•13h ago•68 comments

Series of posts on HTTP status codes (2018)

https://evertpot.com/http/
28•antonalekseev•1d ago•6 comments

FOKS: Federated Open Key Service

https://foks.pub/
205•ubj•18h ago•43 comments

Graphical Linear Algebra

https://graphicallinearalgebra.net/
228•hyperbrainer•14h ago•16 comments

Flix – A powerful effect-oriented programming language

https://flix.dev/
252•freilanzer•16h ago•101 comments

Australia is introducing age checks for search engines like Google

https://www.abc.net.au/news/2025-07-11/age-verification-search-engines/105516256
88•ahonhn•3h ago•65 comments

Show HN: Interactive pinout for the Raspberry Pi Pico 2

https://pico2.pinout.xyz
26•gadgetoid•3d ago•2 comments

Apple-1 Computer, handmade by Jobs and Woz [video]

https://www.youtube.com/watch?v=XdBKuBhdZwg
42•guiambros•2d ago•11 comments

Show HN: I built a playground to showcase what Flux Kontext is good at

https://fluxkontextlab.com
45•Zephyrion•1d ago•13 comments

Underwater turbine spinning for 6 years off Scotland's coast is a breakthrough

https://apnews.com/article/tidal-energy-turbine-marine-meygen-scotland-ffff3a7082205b33b612a1417e1ec6d6
148•djoldman•16h ago•138 comments

Red Hat Technical Writing Style Guide

https://stylepedia.net/style/
192•jumpocelot•15h ago•82 comments

Foundations of Search: A Perspective from Computer Science (2012) [pdf]

https://staffwww.dcs.shef.ac.uk/people/J.Marshall/publications/SFR09_16%20Marshall%20&%20Neumann_PP.pdf
19•mooreds•3d ago•0 comments

Measuring the impact of AI on experienced open-source developer productivity

https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/
582•dheerajvs•14h ago•378 comments

Orwell Diaries 1938-1942

https://orwelldiaries.wordpress.com/page/2/
102•bookofjoe•12h ago•59 comments

eBPF: Connecting with Container Runtimes

https://h0x0er.github.io/blog/2025/06/29/ebpf-connecting-with-container-runtimes/
49•forxtrot•11h ago•5 comments

AI coding tools can reduce productivity

https://secondthoughts.ai/p/ai-coding-slowdown
124•gk1•7h ago•94 comments

Launch HN: Leaping (YC W25) – Self-Improving Voice AI

59•akyshnik•13h ago•28 comments

Analyzing database trends through 1.8M Hacker News headlines

https://camelai.com/blog/hn-database-hype/
140•vercantez•3d ago•68 comments

Diffsitter – A Tree-sitter based AST difftool to get meaningful semantic diffs

https://github.com/afnanenayet/diffsitter
114•mihau•18h ago•28 comments

Matt Trout has died

https://www.shadowcat.co.uk/2025/07/09/ripples-they-cause-in-the-world/
184•todsacerdoti•23h ago•52 comments

Grok 4

https://simonwillison.net/2025/Jul/10/grok-4/
269•coloneltcb•11h ago•184 comments

AI coding tools can reduce productivity

https://secondthoughts.ai/p/ai-coding-slowdown
124•gk1•7h ago

Comments

latenightcoding•6h ago
LLMs make me 10-20x more productive in frontend work, which I barely do. But when it comes to low-level stuff (C/C++) I personally don't find them too useful. They just replace my need to search Stack Overflow.

edit: should have mentioned that the low-level stuff I work on is mature code and, a lot of the time, novel.

justinko•6h ago
Same. It’s amazing for frontend.
relaxing•6h ago
Is this because they had the entire web to train on, code + output and semantics in every page?
Falimonda•5h ago
It's more that a backend developer can now throw together a frontend, and vice versa, without relying on a team member or needing to set aside time to internalize all the concepts necessary to make that other part of the system work. I imagine even a full-stack developer will find benefits.
hluska•5h ago
This has nothing to do with what they asked.
owebmaster•4h ago
So we are all back to be webmasters :)
hluska•5h ago
I’m not sure how this was extended and refined, but there sure are a lot of signs of open source code being used heavily (at least early on). It would make sense to test model fit with the web at large.
famahar•5h ago
It's astonishing. A bit scary, actually. I can easily see the role of front-end dev slowly morphing into a single-person team managing a set of AI tools. More of an architecture role.
sottol•6h ago
Interesting, I find the exact opposite. Although to a much lesser extent (maybe 50% boost).

I ended up shoehorned into backend dev in Ruby/Py/Java and don't find it improves my day-to-day a lot.

Specifically in C, it can bang out complicated but mostly common data structures without fault where I would surely make off-by-one errors. I guess since I do C as a hobby I tend to solve more interesting and complicated problems, like generating a whole array of dynamic C dispatchers from a UI-library spec in JSON that allows parsing and rendering a UI specified in YAML. Gemini Pro even spat out a YAML-dialect parser after a few attempts/fixes.

Maybe it's a function of familiarity and the problems you end up using the AI for.

freeone3000•3h ago
As in, it seems to be best at problems that you’re unfamiliar with in domains where you have trouble judging the quality?
moron4hire•6h ago
This feels like a parallel to the Gell-Mann amnesia effect.

Recently, my company has been investigating AI tools for coding. I know this sounds very late to the game, but we're a DoD consultancy and not one traditionally associated with software development. So, most of the people in the company are very impressed with the AI's output.

I, on the other hand, am a fairly recent addition to the company. I was specifically hired to be a "wildcard" in their usual operations. Which is to say, maybe 10 of us in a company of 3000 know what we're doing regarding software (but that's being generous, because I don't really have visibility into half of the company). So, that means 99.7% of the company doesn't have the experience necessary to tell what good software development looks like.

The stuff the people using the AI are putting out is... better than what the MilOps analysts pressed into writing Python-scripts-with-delusions-of-grandeur were doing before, but by no means what I'd call quality software. I have pretty deep experience in both back end and front end. It's a step above "code written by smart people completely inexperienced in writing software that has to be maintained over a lifetime", but many steps below, "software that can successfully be maintained over a lifetime".

IX-103•5h ago
Well, that's what you'd expect from an LLM. They're not designed to give you the best solution. They're designed to give you the most likely solution. Which means that the results would be expected to be average, as "above average" solutions are unlikely by definition.

You can tweak the prompt a bit to skew the probability distribution with careful prompting (LLMs told to claim to be math PhDs are better at math problems, for instance), but in the end all of those weights in the model are spent to encode the most probable outputs.

So, it will be interesting to see how this plays out. If the average person using AI is able to produce above average code, then we could end up in a virtuous cycle where AI continuously improves with human help. On the other hand, if this just allows more low quality code to be written then the opposite happens and AI becomes more and more useless.

leptons•1h ago
I have no doubt which way it is going to go.
jack_h•3h ago
Before the industrial revolution a cabinetmaker would spend a significant amount of time advancing from apprentice to journeyman to master using only hand tools. Now master cabinetmakers that only use hand tools are exceedingly rare, most furniture is made with power tools and a related but largely different skillset.

When it comes to software the entire reason maintainability is a goal is because writing and improving software is incredibly time consuming and requires a lot of skill. It requires so much skill and time that during my decades in industry I rarely found code I would consider quality. Furthermore the output from AI tools currently may have various drawbacks, but this technology is going to keep improving year over year for the foreseeable future.

dchftcs•1h ago
Maintainable software is also more maintainable by AI. The required standards may be a bit different; for example, there may be less emphasis on whitespace styling. But complexity in the form of subtle connections between different parts of a system is a burden for both humans and AI. AI isn't magic: it still has to reason, it fails on complexity beyond its ability to reason, and maintainable code is code that is easier to reason about.
kannanvijayan•6h ago
I've been hacking on some somewhat systemsy rust code, and I've used LLMs from a while back (early co-pilot about a year ago) on a bunch of C++ systems code.

In both of these cases, I found that just the smart auto-complete is a massive time-saver. In fact, it's more valuable to me than the interactive or agentic features.

Here's a snippet of some code that's in one of my recent buffers:

    // The instruction should be skipped if all of its named
    // outputs have been coalesced away.
    if ! self.should_keep_instr(instr) {
      return;
    }

    // Non-dropped should have a choice.
    let instr_choice =
      choices.maybe_instr_choice(instr_ref)
        .expect("No choice for instruction");
    self.pick_map.set_instr_choice(
      instr_ref,
      instr_choice.clone(),
    );

    // Incref all named def inputs to the PIR choice.
    instr_choice.visit_input_defs(|input_def| {
      self.def_incref(input_def);
    });

    // Decref all named def inputs to the SIR instr.
    instr.visit_inputs(
      |input_def| self.def_decref(input_def, sir_graph)
    );
The actual code _I_ wrote were the comments. The savings in not having to type out the syntax is pretty big. About 80% of the time in manual coding would have been that. Little typos, little adjustments to get the formatting right.

The other nice benefit is that I don't have to trust the LLM. I can evaluate each snippet right there and typically the machine does a good job of picking out syntactic style and semantics from the rest of the codebase and file and applying it to the completion.

The snippet, if it's not obvious, is from a bit of compiler backend code I'm working on. I would never have even _attempted_ to write a compiler backend in my spare time without this assistance.

For experienced devs, autocomplete is good enough for massive efficiency gains in dev speed.

I still haven't warmed to the agentic interfaces because I inherently don't trust the LLMs to produce correct code reliably, so I always end up reviewing it, and reviewing greenfield code is often more work than just writing it (esp now that autocomplete is so much more useful at making that writing faster).

sgc•5h ago
What exact tool are you using for your smart auto-complete?
kannanvijayan•4h ago
Whatever copilot defaults to doing on vscode these days. I didn't configure it very much - just did the common path setup to get it working.
cguess•5h ago
This is exactly my experience as well. I've had agents write a bit of backend code, always small parts. I'm lucky enough to be experienced enough with code I didn't write to be able to quickly debug it when it fails (and it always fails from the first run). Like using AI to write a report, it's good for outlines, but the details are always seemingly random as far as quality.

For frontend though? The stuff I really don't specialize in (despite some of my first HTML beginning on FrontPage 97 back in 1997), it's a lifesaver. Just gotta be careful with prompts, since so many front-end frameworks are basically backend code at this point.

AstroBen•5h ago
This is good if front end is something you just need to get through. It's terrible if your work is moving to involve a lot of frontend - you'll never pick up the skills yourself
sysmax•4h ago
It works with low-level C/C++ just fine as long as you rigorously include all relevant definitions in the context window, provide non-obvious context (like the lifecycle of some various objects) and keep your prompts focused.

Things like "apply this known algorithm to that project-specific data structure" work really well and save plenty of time. Things that require a gut feeling for how things are organized in memory don't work unless you are willing to babysit the model.

freetime2•6h ago
Here is the methodology of the study:

> To directly measure the real-world impact of AI tools on software development, we recruited 16 experienced developers from large open-source repositories (averaging 22k+ stars and 1M+ lines of code) that they’ve contributed to for multiple years. Developers provide lists of real issues (246 total) that would be valuable to the repository—bug fixes, features, and refactors that would normally be part of their regular work. Then, we randomly assign each issue to either allow or disallow use of AI while working on the issue. When AI is allowed, developers can use any tools they choose (primarily Cursor Pro with Claude 3.5/3.7 Sonnet—frontier models at the time of the study); when disallowed, they work without generative AI assistance. Developers complete these tasks (which average two hours each) while recording their screens, then self-report the total implementation time they needed. We pay developers $150/hr as compensation for their participation in the study.

So it's a small sample size of 16 developers. And different tasks were randomly assigned to the no-AI and with-AI conditions, so the control group doesn't have the same tasks as the experimental group. I think this could lead to some pretty noisy data.

Interestingly, small sample size isn't in the list of objections that the author includes under "Addressing Every Objection You Thought Of, And Some You Didn’t".

I do think it's an interesting study. But would want to see if the results could be reproduced before reading into it too much.
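The sample-size worry above is easy to get intuition for with a toy simulation. This is a minimal sketch with illustrative numbers only: task times are drawn from a heavy-tailed distribution, the true slowdown factor is set at 1.19 to echo the study's reported ~19% figure, and per-developer clustering is ignored (modeling it would widen the spread further):

```python
import random
import statistics

def simulated_slowdown(n_issues=246, true_factor=1.19, rng=None):
    """Randomly assign issues to AI / no-AI and estimate the slowdown factor."""
    rng = rng or random.Random()
    ai_times, noai_times = [], []
    for _ in range(n_issues):
        base = rng.lognormvariate(0.0, 0.8)   # heavy-tailed task durations (hours)
        if rng.random() < 0.5:
            ai_times.append(base * true_factor)
        else:
            noai_times.append(base)
    return statistics.mean(ai_times) / statistics.mean(noai_times)

rng = random.Random(0)                        # fixed seed for reproducibility
estimates = sorted(simulated_slowdown(rng=rng) for _ in range(1000))
lo, hi = estimates[25], estimates[975]        # empirical 95% interval
print(f"95% of single-study estimates fall in [{lo:.2f}, {hi:.2f}] around a true 1.19")
```

Even with 246 issues and no developer-level variation, single-study estimates of the factor scatter noticeably, which supports the call for replication before reading too much into the point estimate.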

Tainnor•6h ago
The sample size isn't 16 developers, it's 246 issues.
freetime2•5h ago
I agree with that - but on the other hand, surely the number of developers matters here? For example, if instead of 16 developers the study consisted of a single developer completing all 246 tasks with or without AI, and comparing the observed completion times, I think most people would question the reproducibility and relevance of the study.
hackable_sand•4h ago
Okay, so why not 246,000 issues?
shoo•2h ago
If you read through the methodology, including how they paid the participants $150 / hr, for 20-40 hours work per participant, you can probably hazard a guess why they didn't scale up the size of the study by 1000x.
jack_pp•50m ago
I think the productivity gains most people rave about are for stuff like: I wanted to do X, which isn't hard if you are experienced with library Y, library Y is pretty popular, and the LLM did it perfectly on the first try!

I think that's where you get 10-20x. When you're working on niche stuff it's either not gonna work or it'll work poorly.

For example, right now I need to figure out why an ffmpeg filter doesn't do X thing smoothly, even though the C code for the filter is tiny and self-contained. Gemini refuses to add comments to the code. It just apologizes for not being able to add comments to 150 lines of code lol.

However, for building an ffmpeg pipeline in Python I was dumbfounded by how fast I was prototyping stuff and building fairly complex filter chains, which, if I had to do by hand just by reading the docs, would've taken a whole lot more time, effort and frustration, but was a joy to figure out with Gemini.

So going back to the study: IMO it's flawed, because by definition working on new features for open source projects wouldn't be the bread and butter of LLMs. Most people aren't working on stuff like this; they're rewriting the same code that 10,000 other people have written, but with their own tiny little twist.
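For the ffmpeg-pipeline use case described above, here is a hedged sketch of what prototyping a filter chain in Python can look like. It only assembles the command's argv (actually running it requires ffmpeg on PATH); scale, fps and hqdn3d are standard ffmpeg filters, while build_ffmpeg_cmd and render_filter are made-up helper names for illustration:

```python
def render_filter(name, params):
    """Render one ffmpeg filter, e.g. ('scale', {'w': 1280}) -> 'scale=w=1280'."""
    if not params:
        return name
    return name + "=" + ":".join(f"{k}={v}" for k, v in params.items())

def build_ffmpeg_cmd(src, dst, filters):
    """Assemble an ffmpeg argv with a chained -vf filtergraph (does not run it)."""
    chain = ",".join(render_filter(name, params) for name, params in filters)
    return ["ffmpeg", "-i", src, "-vf", chain, dst]

cmd = build_ffmpeg_cmd(
    "in.mp4", "out.mp4",
    [("scale", {"w": 1280, "h": -2}),   # h=-2 keeps the aspect ratio
     ("fps", {"fps": 30}),
     ("hqdn3d", {})],                   # denoise with default strength
)
print(" ".join(cmd))
# ffmpeg -i in.mp4 -vf scale=w=1280:h=-2,fps=fps=30,hqdn3d out.mp4
```

Building the chain as data like this is what makes LLM-assisted iteration fast: each filter is a small, checkable unit rather than a long hand-written filtergraph string.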

budududuroiu•6h ago
I was surprised at how much better v0 was these days. I remember it yielding clunky UIs initially.

I thought it was the model, but then I realised, v0 is carried by the shadcn UI library, not the intelligence of the model

JKCalhoun•5h ago
As others have probably experienced, I can only add that I am now doing coding I would have kicked down the road if I did not have LLM assistance.

Example: using LeafletJS — not hard, but I didn't want to have to search all over to figure out how to use it.

Example: other web page development requiring dropping image files, complicated scrolling, split-views, etc.

In short, there are projects I have put off in the past but eagerly begin now that LLMs are there to guide me. It's difficult to compare times and productivity in cases like that.

georgemcbay•1h ago
This is pretty similar to my own experience using LLMs as a tool.

When I'm working with platforms/languages/frameworks I am already deeply familiar with I don't think they save me much time at all. When I've tried to use them in this context they seem to save me a bunch of time in some situations, but also cost me a bunch of time in others resulting in basically a wash as far as time saved goes.

And for me a wash isn't worth the long-term cost of losing touch with the code by not being the one to have crafted it.

But when it comes to environments I'm not intimately familiar with they can provide a very easy on-ramp that is a much more pleasant experience than trying to figure things out through often iffy technical documentation or code samples.

tomcam•5h ago
What bothers me more than any of this particular discussion is that we seem to be incapable of determining programmer productivity in a meaningful way since my debut as a programmer 40 years ago.
journal•4h ago
What about the $ you make? Isn't that an indicator? You've probably made more than me, so you are more successful, while both of us might be doing the same thing.
__MatrixMan__•4h ago
I don't think there's much of a correlation there.
hammyhavoc•4h ago
Productivity has zero to do with salary. Case in point: FOSS.

Some of the most productive devs don't get paid by the big corps who make use of their open source projects, hence the constant urging of corps and people to sponsor the projects they make money from.

Ancalagon•3h ago
In a vacuum I don’t believe pay alone is a very good indicator. What might be a better one is if someone has a history across their career of delivering working products to spec, doing this across companies and with increasing responsibility. This of course can only be determined after the fact.
StefanBatory•1h ago
Is DB2 Admin more productive than Java dev on the same seniority?

What about countries? In my country, Poland, $25k would be an amazing salary for a senior, while in the USA fresh grads can earn $80k. Are they more productive?

... at the same time, given same seniority, job and location - I'd be willing to say it wouldn't be a bad heuristic.

ido•1h ago
It doesn't undermine your point, but if you mean gross yearly wage $25k is not an amazing salary for senior software developers in Poland (I guess it depends where in Poland).
tomcam•4m ago
Salary is an indirect and partially useful metric, but one could argue that your ability to self-promote matters more, at least in the USA. I worked at Microsoft and saw that some of the people who made fat stacks of cash just happened to be at the right place at the right time, or promoted things that looked good but were not good for the company itself.

I made great money running my own businesses, but the vast majority of the programming was by people I hired. I’m a decent talent, but that gave me the ability to hire better ones than me.

jaredklewis•2h ago
I’m confused as to why anyone would think this would be possible to determine.

Like can we determine the productivity of doctors, lawyers, journalists, or pastry chefs?

What job out there is so simple that we can meaningfully measure all the positive and negative effects of the worker as well as account for different conditions between workers.

I could probably get behind the idea that you could measure productivity for professional poker players (given a long enough evaluation period). Hard to think of much else.

__loam•2h ago
Won't stop MBAs from trying though.
tomcam•8m ago
Duly upvoted! I tend to agree. Yet the shibboleth of productivity haunts us still.
hluska•5h ago
I’ve been around tech for a long time. At this point, I’ve lost count of how many hype cycles I’ve seen hit the "hold on, everything sucks" stage. Generative AI is seemingly at that stage now, and it's getting repetitive.
CaptainFever•4h ago
Trough of Disillusionment (followed by the Slope of Enlightenment and Plateau of Productivity): https://en.wikipedia.org/wiki/Gartner_hype_cycle
bluefirebrand•1h ago
My bold prediction is that the Trough of Disillusionment for LLMs is going to be a very long stretch
softwaredoug•5h ago
What if this is true? What if we as a developer community are focused on the wrong thing to increase productivity?

Like, what if by focusing on LLMs for productivity we just reinforce old bad habits and get stuck in a local maximum... And even worse, what if being stuck with current so-so patterns, languages, etc. means we don't innovate in language design, tooling, or other areas that might actually be productivity wins?

journal•4h ago
Imagine having interstate highways built in one night: you wake up and you have all these highways and roads, and everyone is confused about what they are and how to use them. Using an LLM is the opposite of boiling frogs, because you're not the leader writing, you're just suggesting... I just realized I might not know what I'm talking about.
__MatrixMan__•4h ago
We were stuck near local maxima since before LLMs came on the scene. I figure the same concentration of innovators are gonna innovate, now LLM-assisted, and the same concentration of best-practice folk are gonna best-practice, now LLM-assisted. Local maxima might get stickier, but greener pastures will be found more quickly than ever.

I expect it'll balance.

raggi•4h ago
They averaged producing 47% more code on the AI tasks, but took only 20% more time. The report glosses over these considerations, but I’m left wondering: was the extra code superfluous, or did it produce better structure / manage debt better? If that extra 47% of code translates to lower debt and more consistent throughput over the long term, I might take it, given how crushed projects get by debt. Anyway, it's all conjecture, because there are massive statistical differences in the outcomes but no measures of what they mean. That meaning matters a ton.
gpm•4h ago
Honestly my experience from using AI to code (primarily claude sonnet) is that that "extra 47%" is probably itself mostly tech debt. Places where the AI repeated itself instead of using a loop. Places where the AI wrote tests that don't actually test anything. Places where the AI failed to produce a simple abstraction and instead just kept doing the same thing by hand. Etc.

AI isn't very good at being concise, in my experience. To the point of producing worse code. Which is a strange change from humans who might just have a habit of being too concise, but not by the same degree.
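A contrived Python illustration of the repetition pattern described above (invented code, not actual model output): the same aggregation written the verbose way, then with the abstraction a reviewer would ask for:

```python
# The repetitive shape: one near-identical pass per category, copy-pasted.
def totals_verbose(orders):
    food_total = 0
    for o in orders:
        if o["category"] == "food":
            food_total += o["amount"]
    drink_total = 0
    for o in orders:
        if o["category"] == "drink":
            drink_total += o["amount"]
    merch_total = 0
    for o in orders:
        if o["category"] == "merch":
            merch_total += o["amount"]
    return {"food": food_total, "drink": drink_total, "merch": merch_total}

# The concise version: categories become data, one pass over the orders.
def totals_concise(orders, categories=("food", "drink", "merch")):
    totals = {c: 0 for c in categories}
    for o in orders:
        if o["category"] in totals:
            totals[o["category"]] += o["amount"]
    return totals

orders = [{"category": "food", "amount": 7}, {"category": "drink", "amount": 3}]
assert totals_verbose(orders) == totals_concise(orders)
```

Both functions compute the same result; the verbose one is the kind of "47% more code" that behaves correctly yet quietly accumulates maintenance cost.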

raggi•1h ago
Your response implies the AI-produced code was landed without review. That’s a possible outcome, but I would hope it’s unlikely to account for the whole group at this scale. We’re of course still lacking data.
trollbridge•1h ago
I very much doubt that when individual programmers are producing significantly more code with the help of AI that somehow the review process simultaneously scales up to perform adequate review of all of that extra code.

In my experience, review was inadequate back before we had AI spewing forth code of dubious quality. There's no reason to think it's any better now.

An actually-useful AI would be one that would make reviews better, do them itself, or at least help me get through reviews faster.

aitchnyu•1m ago
Can we have a linter for both high verbosity/repetitiveness and high terseness? I know copy-paste detectors and cognitive-complexity linters are related. I recently generated code that interleaved spreadsheet worksheets (multiple of them) and cell-formatting boilerplate with data querying. I asked the AI to put the boilerplate into another class and expose .write_balance_row(), and it did it perfectly. If a tool reported this, huge changes wouldn't have to reach human reviewers, and AIs could iterate and pass the linter.
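A toy sketch of such a repetitiveness linter, assuming a naive line-based approach (real copy-paste detectors such as PMD's CPD work on token streams, which also catches duplicates with renamed variables; the worksheet-style method names below are purely illustrative):

```python
from collections import defaultdict

def repeated_blocks(source, window=4):
    """Flag groups of `window` consecutive non-blank lines that occur more than once."""
    lines = [line.strip() for line in source.splitlines()]
    seen = defaultdict(list)
    for i in range(len(lines) - window + 1):
        chunk = tuple(lines[i:i + window])
        if all(chunk):                      # skip windows containing blank lines
            seen[chunk].append(i + 1)       # record 1-based start line
    return {chunk: locs for chunk, locs in seen.items() if len(locs) > 1}

code = """\
ws = book.add_worksheet()
ws.set_column(0, 5, 12)
ws.freeze_panes(1, 0)
ws.write_row(0, 0, headers)
ws = book.add_worksheet()
ws.set_column(0, 5, 12)
ws.freeze_panes(1, 0)
ws.write_row(0, 0, headers)
"""
dups = repeated_blocks(code)
for chunk, locs in dups.items():
    print(f"{len(chunk)}-line block repeated at lines {locs}")
```

A check like this could run in the edit loop, so an agent keeps refactoring until the duplication report comes back empty before a human ever reviews the change.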
lmm•59m ago
> They averaged producing 47% more code on the AI tasks, but took only 20% more time. The report here biases over these considerations, but I’m left wondering: was the extra code superfluous or did this produce better structure / managed debt better? If that extra 47% of code translates to lower debt and more consistent throughput over the long term, I might take it, given how crushed projects get from debt.

Wouldn't it be the opposite? I'd expect the code would be 47% longer because it's worse and heavier in tech debt (e.g. code repeated in multiple places instead of being factored out into a function).

kylecazar•4h ago
Now do a study that specifically gauges how useful an LLM (including smart tab completion) is for a frontend dev working in React/Next/Tailwind on everyday Jira tickets.

These were maintainers of large open source projects. It's all relative. It's clearly providing massive gains for some and not as much for others. It should follow that its benefit to you depends on who you are and what you are working on.

It isn't black and white.

cheeze•4h ago
As a backend dev who owns a few internal crappy frontends, LLMs have been the best thing ever. Code quality isn't the top priority, I just need to plumb some data to an internal page at BigCorp.
distalx•3h ago
Could you share more about your process and how they specifically help you with your internal frontends? Any details would be great! Thanks!
franciscop•3h ago
It's a very well-controlled study of... exactly what the study claims to study. Yes, they didn't study a different thing, for _many_ reasons. Yes, we shouldn't haphazardly extrapolate to other parts of engineering. But it looks like a good study nonetheless.

There are some very good findings, though, like how the devs thought they were sped up when they were actually slowed down.

xarope•4h ago
I think this, for me, is the most worrying part: "You can see that for AI Allowed tasks, developers spent less time researching and writing code".

My analogy is watching people spend their time figuring out how to change colors and draw shapes in PowerPoint rather than focusing on the content of the presentation. So here we have developers focusing their efforts on correcting the AI's output rather than doing the research and improving their own ability to deliver code in the future.

Hmm...

hammyhavoc•4h ago
This has been my observation too. It's a tool for the lazy.
yukai•3h ago
laziness is a driving force of progress
anon15123•3h ago
in what direction
baq•32m ago
All of them.
jack_pp•1h ago
You can say the same about a printer. Or a Kindle: oh, you're too lazy to carry around 5 books with you?
ido•1m ago
Us lazies need tools too!
seanmcdirmid•3h ago
It can get you over some mental blocks; having some code to look at can start the idea process even if it’s wrong (just like with writing). I don’t think it’s bad, just as I don’t think writing throwaway code for prototyping is a bad way to start a project you aren’t sure how to tackle. Waterfall (lots of research and design up front) is still not going to work even if you forgo AI.
skissane•2h ago
I find I’m most likely to use an LLM to generate code in certain specific scenarios: (i) times I’m suffering from “writer’s block” or “having trouble getting started”; (ii) a language or framework I don’t normally use; (iii) feeling tired/burnt out/demotivated

When I’m in the “zone” I wouldn’t go near an LLM, but when I’ve fallen out of the “zone” they can be useful tools in getting me back into it, or just finishing that one extra thing before signing off for the day

I think the right answer to “does LLM use help or hinder developer productivity” is “it depends on how you use them”

dearilos•4h ago
I found that early and often code reviews can offset the reduction in productivity. A good code review process can fix this.
strangescript•4h ago
This entire concept hinges on AI not getting better. If you believe AI is going to continue to get better at the current ~5-10% a month rate, then hand-waving over developer productivity today is about the same as writing an article about the internet being a fad in 1999.
trashchomper•4h ago
On the flip side, why would I use AI today if it presents no immediate benefit? Why not wait 5 years and see if it becomes actually helpful?
strangescript•3h ago
better yet, wait 10, let me know how it goes
benrutter•2h ago
If they do improve at 5-10% a month then that'd definitely be true (tbh I'm not sure they are even improving at that rate now - 10% a month for a year would be a 3x improvement with compounding).

I guess the tricky bit is, nobody knows what the future looks like. "The internet is a fad" in 1999 hasn't aged well, but a lot of people touted 1960s AI, XML and 3D televisions as things that'd be everyday tools in only a few years.

We're all just guessing till then.
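For reference, the compounding arithmetic in the comment above checks out:

```python
# Compounding a monthly improvement rate r over a year: (1 + r) ** 12
for monthly in (0.05, 0.10):
    yearly = (1 + monthly) ** 12
    print(f"{monthly:.0%}/month compounds to {yearly:.2f}x after a year")
# 5%/month compounds to 1.80x after a year
# 10%/month compounds to 3.14x after a year
```

So a sustained 10%/month rate really would mean roughly a 3x improvement per year, which is exactly why the "if" in that claim carries so much weight.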

dismalaf•2h ago
I find LLMs are decent at regurgitating boilerplate. Basically the same kind of stuff you could google then copy-paste... AI chatbots, now that they have web access, are also good at going over documentation and can save you a little time searching through the docs yourself.

They're not great at business logic though, especially if you're doing anything remotely novel. Which is the difficult part of programming anyway.

But yeah, to the average corporate programmer who needs to recreate the same internal business tool that every other company has anyway, it probably saves a lot of time.

trollbridge•1h ago
They're great at helping me figure out how to make something work with a poorly-documented, buggy framework, which is indeed a large fraction of my job, whether I like it or not.
calrain•2h ago
I've been using Claude Code heavily for about 3 months now, and I'm pretty sure I'm between 10 and 20 times more productive while using it.

How I measure performance is how many features I can implement in a given period of time.

It's nice that people have done studies and have opinions, but for me, it's 10x to 20x better.

fuomag9•1h ago
Same, I’ve done stuff that should have taken me 2-3 weeks in days
zsoltkacsandi•1h ago
I have exactly the same experience.
benreesman•1h ago
I find the swings to be wild: when you win with it, you win really big, but when you lose with it, it's a real bite out of your week too. And I think 10x to 20x has to be figurative, right? You can do 20x by volume maybe, but to borrow an expression from Steve Ballmer, that's like measuring an airplane by kilograms.

Someone already operating at the very limit of their abilities, doing stuff that is for them high-complexity, high-cognitive-load, detail-intense, and tactically non-obvious? Even a machine that just handed you the perfect code couldn't 20x your real output; even if it gave you a source file at 20x your native sophistication, you wouldn't be able to build and deploy it, let alone make changes to it.

But even the last 5-20%, when you're already operating at your very limit and trying to hit that limit every single day, is massive: it moves a bunch of stuff on the bubble from "not realistic" to "we did that".

calrain•1h ago
There are definitely swings. Last night it took about 2 hours to get Monaco into my webpack-built Bootstrap template; it came down to CSS being mishandled, and Claude couldn't see the light. I just pasted the code into ChatGPT o3 and it fixed it on the first try. I pasted ChatGPT's output back into Claude and voilà, all done.

A key skill is sensing when the AI is starting to guess at solutions (no different from human devs) and then either leaning on another AI or resetting the context and starting over.

I'm finding the code quality increases greatly with the addition of the text 'and please follow best practices because this will be pen tested!' and wow, it takes it much more seriously.

cluckindan•29m ago
Is there a way to have two agentic AIs do pair programming?
nottorp•1m ago
Doesn't sound like you were writing actual functionality code, just integrating libraries?
jack_pp•1h ago
Let's be serious, what percentage of devs are doing "high complexity, high cognitive load, detail intense" work?
baq•50m ago
All of them, some just don’t notice, don’t care or don’t know this line of work is like that. Look at how junior devs work vs really experienced, self-aware engineers. The latter routinely solve problems the former didn’t know existed.
jack_pp•40m ago
What does being experienced in a field of work have to do with self-awareness?

Also, I disagree. For web dev at least, most people are just rewriting the same stuff in a different order. Even though the entire project might be complex from a high-level perspective, when you dive into the components, or even just a single route, it isn't "high complexity" at all. And since I believe most jobs are in web/app dev, which just recycles the same code over and over, that's why a lot of people are claiming huge productivity boosts.

TeMPOraL•18m ago
> Someone already operating at the very limit of their abilities doing stuff that is for them high complexity, high cognitive load, detail intense, and tactically non-obvious?

When you zoom in, even this kind of work isn't uniform - a lot of it is still shaving yaks, boring chores, and tasks that are hard dependencies for the work that is truly cognitively demanding, but themselves are easy(ish) annoyances. It's those subtasks - and the extra burden of mentally keeping track of them - that sets the limit of what even the most skilled, productive engineer can do. Offloading some of that to AI lets one free some mental capacity for work that actually benefits from that.

> Even a machine that just handed you the perfect code can't 20x your real output, even if it gave you the source file at 20x your native sophistication you wouldn't be able to build and deploy it, let alone make changes to it.

Not true if you use it right.

You're probably following the "grug developer" philosophy, as it's popular these days (as well as "but think of the juniors!", which is the perceived ideal in the current zeitgeist). By design, this turns coding into boring, low-cognitive-load work. Reviewing such code is, thus, easier (and less demoralizing) than writing it.

20x is probably a bit much across the board, but for the technical part, I can believe it - there's too much unavoidable but trivial bullshit involved in software these days (build scripts, Dockerfiles, IaaS). Preventing deep context switching on those is a big time saver.

sph87•1h ago
Where I have found Claude most helpful is on problems with very specific knowledge requirements.

Like: Why isn't this working? Here, Claude, read this ~90-page PDF and tell me where I went wrong interfacing with this SDK.

Ohh, I accidentally passed async_context_background_threading_safe instead of async_context_thread_safe_poll, and so now it's panicking. Wow, that would have taken me forever.

DeepYogurt•1h ago
Have any open source work you can show off?
calrain•58m ago
Unfortunately not. Ensuring the final code is well written is a challenge I'm putting off for now.

I'm leaning into the future growth of AI capabilities to help me here, otherwise I'll have to do it myself.

That is a tomorrow problem, too much project structure/functionality to get right first.

gtsop•54m ago
I cringe when I see these numbers. 20 times better means you can accomplish in two months what would otherwise take more than three years, which is ridiculous when said out loud. We can make it even more ridiculous by pointing out you'd do in 3 years the work of a working lifetime (60 years).

I'm wondering: on what sort of tasks are you seeing this 20x boost?

baq•45m ago
It isn't ridiculous; it's easily true, especially when you're experienced in general but have little to no knowledge of a particular big piece of tech. Say you stopped doing frontend when jQuery was all there was and you're coming back: I'm doing things with React in hours that I would have had no business doing in weeks a couple of years ago.
calrain•25m ago
It is amazing, cringe all you want :)

I scoped out a body of work, and even with the AI assisting on building cards and feature documentation, the estimate came to 2 to 4 weeks to implement.

It was done in 2 days.

The key I've found to working as fast as possible is to have planning sessions with Claude Code and make it challenge you and ask tons of questions. Then get it to break the work into 'cards' (think Jira, but they're just .md files in your repo), and maintain a todo.md/done.md file pair that sorts and organizes the workflow.

Then start a new context, tell it to review todo.md and pick up the next task, and burn through it. When done, commit, update todo.md and done.md, /compact, and you're off on the next.

It's more than AI hinting at what to do; it's a whole new way of working, with rigor and structure around it. You just focus fire on the next card, and the next, and if you ever think up new features, card them up and put them in the work queue.
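The card/queue layout described above can be sketched in a few lines; the file names follow the commenter's convention, while the card format and the helper function are hypothetical illustrations:

```python
# Sketch of the card-based queue: cards are .md files in the repo,
# todo.md lists pending cards, done.md lists finished ones.
# The card contents and repo name here are made up for illustration.
from pathlib import Path

repo = Path("my-project")  # hypothetical repo root
(repo / "cards").mkdir(parents=True, exist_ok=True)

# One card per feature, as a plain markdown file (think Jira ticket).
(repo / "cards" / "001-user-auth.md").write_text(
    "# Card 001: user auth\n\n- [ ] login form\n- [ ] session handling\n"
)

# todo.md / done.md act as the work queue; the agent reads todo.md,
# implements the top card, then moves its entry to done.md.
(repo / "todo.md").write_text("- cards/001-user-auth.md\n")
(repo / "done.md").write_text("")

def complete_top_card(root: Path) -> str:
    """Move the first card entry from todo.md to done.md."""
    todo, done = root / "todo.md", root / "done.md"
    lines = todo.read_text().splitlines()
    card, rest = lines[0], lines[1:]
    todo.write_text("\n".join(rest) + ("\n" if rest else ""))
    done.write_text(done.read_text() + card + "\n")
    return card

print(complete_top_card(repo))
```

In practice the "move a card" step is done by the agent itself at the end of each context, not by a script; the point is that the queue is just plain files in the repo, so it survives context resets.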

congaliminal•33m ago
You're getting 6 months worth of work done in a week?
shinycode•18m ago
I made a bet with a co-worker that a migration from Angular 15 to Angular 19 could be done really fast, avoiding months of work. I spent a whole evening on it, and Claude Code was never able to pull off even the 15-to-16 migration on its own. A total waste of time; nothing worked. To my surprise, it cost me $275 for nothing. So maybe for greenfield projects it's smooth and saves time, but it's not a silver bullet on projects with problems.
congaliminal•1m ago
Yeah, so much of the dialogue being "I did this thing real fast that I didn't know how to do before" and "it will keep getting better without upper bounds" makes the whole debate rather tiresome.

Just yesterday I had some AI bro butt into a conversation I was having with a good friend, and besides other rudeness he got properly mad because I kept telling him I understand what the models do and am skeptical of his claims of future progress. All his arguments boiled down to wishful thinking and extrapolating the line, and he would NOT believe I understand the systems he was describing. I'm still a bit sore about it. These guys are literally going out of their way to spam me with their BS in real life now.

iammrpayments•59s ago
I can't believe such numbers. If this were true, why don't you quit your job and vibe code 10 iOS apps?