
OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
411•klaussilveira•5h ago•93 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
765•xnx•10h ago•464 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
29•SerCe•1h ago•24 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
136•isitcontent•5h ago•14 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
128•dmpetrov•6h ago•53 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
35•quibono•4d ago•2 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
240•vecti•7h ago•114 comments

A century of hair samples proves leaded gas ban worked

https://arstechnica.com/science/2026/02/a-century-of-hair-samples-proves-leaded-gas-ban-worked/
61•jnord•3d ago•4 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
307•aktau•12h ago•152 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
308•ostacke•11h ago•84 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
167•eljojo•8h ago•123 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
385•todsacerdoti•13h ago•217 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
313•lstoll•11h ago•230 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
47•phreda4•5h ago•8 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
103•vmatsiiako•10h ago•34 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
177•i5heu•8h ago•128 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
13•gfortaine•3h ago•0 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
231•surprisetalk•3d ago•30 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
968•cdrnsf•15h ago•414 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
139•limoce•3d ago•79 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
39•rescrv•13h ago•17 comments

Evaluating and mitigating the growing risk of LLM-discovered 0-days

https://red.anthropic.com/2026/zero-days/
34•lebovic•1d ago•11 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
7•kmm•4d ago•0 comments

Show HN: Smooth CLI – Token-efficient browser for AI agents

https://docs.smooth.sh/cli/overview
76•antves•1d ago•56 comments

I'm going to cure my girlfriend's brain tumor

https://andrewjrod.substack.com/p/im-going-to-cure-my-girlfriends-brain
34•ray__•2h ago•10 comments

The Oklahoma Architect Who Turned Kitsch into Art

https://www.bloomberg.com/news/features/2026-01-31/oklahoma-architect-bruce-goff-s-wild-home-desi...
17•MarlonPro•3d ago•3 comments

Show HN: Slack CLI for Agents

https://github.com/stablyai/agent-slack
38•nwparker•1d ago•8 comments

Claude Composer

https://www.josh.ing/blog/claude-composer
101•coloneltcb•2d ago•69 comments

How virtual textures work

https://www.shlom.dev/articles/how-virtual-textures-really-work/
25•betamark•12h ago•23 comments

The Beauty of Slag

https://mag.uchicago.edu/science-medicine/beauty-slag
31•sohkamyung•3d ago•3 comments

State of AI-assisted software development

https://blog.google/technology/developers/dora-report-2025/
95•meetpateltech•4mo ago

Comments

wiz21c•4mo ago
> This indicates that AI outputs are perceived as useful and valuable by many of this year’s survey respondents, despite a lack of complete trust in them.

Or the respondents have a hard time admitting AI can replace them :-)

I'm a bit cynical, but sometimes when I use Claude it is downright frightening how good it is. Having coded for a lot of years, I'm sometimes a bit scared that my craft can, at times, be so easily replaced... Sure, it's not building all my code, it fails, etc., but it's a bit disturbing to see that something you have trained at for a very long time can be done by a machine... Maybe I'm just feeling a glimpse of what others felt during the industrial revolution :-)

polotics•4mo ago
Well, when I use a power screwdriver I am always impressed by how much more quickly I can finish easy tasks too. I've also occasionally busted a screw or three that I then had to drill out...
surgical_fire•4mo ago
In a report from Google, which is heavily invested in AI becoming the future, I expect the respondents to sound more positive about AI than they actually are.

Much like in person, I pretend to think AI is much more powerful and inevitable than I actually believe it is. Professionally it makes very little sense to be truthful. Sincerity won't pay the bills.

bluefirebrand•4mo ago
Everyone lying to their bosses about how useful AI is has placed us all in a prisoner's dilemma where we all have to lie or we get replaced

If only people could be genuinely critical without worrying they will be fired

surgical_fire•4mo ago
I agree. I also don't make the rules.

And to be honest, I don't really care. It is a very comfortable position to be in. Allow me to explain:

I genuinely believe AI poses no threat to my employment. The only medium-term threat I see is the very likely economic slowdown in the coming years.

Meanwhile, I am happy to do this silly dance while companies waste money and resources on what I see as a dead-end, wasteful technology.

I am not here to make anything better.

hu3•4mo ago
I also find it great for prompts like:

"this function should do X, spot inconsistencies, potential issues and bugs"

It's eye opening sometimes.
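For instance, handing it a toy function like this (made up for illustration, not from any real codebase) tends to surface several planted bugs at once:

    /* Prompt: "this function should return the mean of the array,
       spot inconsistencies, potential issues and bugs" */
    #include <stddef.h>

    double average(const int *values, size_t n)
    {
        int sum = 0;                     /* can overflow for large inputs */
        for (size_t i = 0; i <= n; i++)  /* off-by-one: reads past the end */
            sum += values[i];
        return sum / n;                  /* integer division truncates, and
                                            n == 0 divides by zero */
    }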

cogman10•4mo ago
So long as you view AI as a sometimes-competent liar, it can be useful.

I've found AI is pretty good at dumb boilerplate stuff. I was able to whip out prototypes, client interfaces, tests, etc pretty fast with AI.

However, when I've asked AI "Identify performance problems or bugs in this code" I find it'll just make up nonsense. Particularly if there aren't problems with the code.

And it makes sense that this is the case. AI has been trained on a mountain of boilerplate and a thimble of performance and bug optimizations.

fluoridation•4mo ago
>AI has been trained on a mountain of boilerplate and a thimble of performance and bug optimizations.

That's not exactly it, I think. If you look through a repository's entire history, the deltas for the bug fixes and optimizations will be there. However, even a human who's not intimately familiar with the code and the problem will have a hard time understanding why a change fixes a bug, even if they understand the bug conceptually. That's because source code encodes neither developer intent, nor specification, nor real design goals. Which was the cause of the bug?

* A developer who understood the problem and its solution, but made a typo or a similar miscommunication between brain and fingers.

* A developer who understood the problem but failed to implement the algorithm that solves it.

* An algorithm was used that doesn't solve the problem.

* The algorithm solves the problem as specified, but the specification is misaligned with the expectations of the users.

* Everything used to be correct, but an environment change made it so the correct solution stopped being correct.

In an ideal world, all of this information could be somehow encoded in the history. In reality this is a huge amount of information that would take a lot of effort to condense. It's not that it wouldn't have value even for real humans, it's just that it would be such a deluge of information that it would be incomprehensible.
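To make that concrete with a hypothetical one-line fix (invented for this example):

    -    for (i = 0; i <= len; i++)
    +    for (i = 0; i < len; i++)

Nothing in the delta says whether `<=` was a typo, a misunderstanding of whether `len` is a count or a last valid index, or a faithful implementation of a spec that was itself wrong about the boundary. The diff records the what, never the why.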

pluc•4mo ago
Straight code writing has never been the problem - it's the understanding of said code that is. When you rely on AI, and AI creates something, it might increase productivity immediately, but once you need to debug something that uses that piece of code, it will nullify that gain, as you have no idea where to look. That's just one aspect of this false equivalency.
bopbopbop7•4mo ago
Or you aren’t as good as you think you are :-)

Almost every person I've worked with who is impressed by AI-generated code has been a low performer who can't spot the simplest bugs in the code. Usually the same developers who blindly copy-pasted from Stack Overflow before.

bitwize•4mo ago
We may see a return to the days when businesses relied on systems analysts, not programmers, to design their information systems—except now, the programming work will be left to the machines.
zwieback•4mo ago
I find AI coding assistants useful when I'm using a new library or language feature I'm not super familiar with.

When I have AI generate code using features I'm very familiar with I can see that it's okay but not premium code.

So it makes sense that I feel more productive but also a little skeptical.

apt-apt-apt-apt•4mo ago
When I see the fabulous images generated by AI, I can't help but wonder how artists feel.

Anyone got a pulse on what the art community thinks?

fluoridation•4mo ago
Generally speaking, they don't like their public posts being scraped to train AIs, and they don't like accounts that post AI output without disclosing it.
pluc•4mo ago
Every study I've read says nobody is seeing productivity gains from AI use. Here's an AI vendor saying the opposite. Funny.
azdle•4mo ago
It's not even claiming that. It's only claiming that people who responded to the survey feel more productive. (Unless you assume that people taking this survey have an objective measure for their own productivity.)

> Significant productivity gains: Over 80% of respondents indicate that AI has enhanced their productivity.

_Feeling_ more productive is in line with the one proper study I've seen.

Foobar8568•4mo ago
Well, I feel more productive, and I am; on coding activities, though, I am not convinced. It basically replaced SO and Google, but at the end of the day I always need and want to check reference material that I may not have known existed. Plenty of times, Google couldn't even find it.

So in my case: yes, but not on the activities these sellers are usually claiming.

thebigspacefuck•4mo ago
The METR study showed that even though people felt more productive, they weren't: https://arxiv.org/abs/2507.09089
knes•4mo ago
The METR study is a joke. It surveyed only 16 devs, in the era of Sonnet 3.5.

Can we stop citing this study?

I'm not saying the DORA study is more accurate, but at least it surveyed 5000 developers, globally and more recently (between June 13 and July 21, 2025), which means using the most recent SOTA models.

bopbopbop7•4mo ago
Yeah, cite the study funded by a company that has invested billions into AI instead; that will surely be unbiased and accurate.
capnrefsmmat•4mo ago
It didn't "survey" devs. It paid them to complete real tasks while they were randomly assigned to use AI or not, and measured the actual time taken to complete the tasks vs. just the perception. It is much higher quality evidence than a convenience sample of developers who just report their perceptions.
rsynnott•4mo ago
> I'm not saying the DORA study is more accurate, but at least it surveyed 5000 developers, globally and more recently

It's asking a completely different question; it is a survey of peoples' _perceptions of their own productivity_. That's basically useless; people are notoriously bad at self-evaluating things like that.

Pannoniae•4mo ago
There's a few explanations for this, and it's not necessarily contradictory.

1. AI doesn't improve productivity and people just have cognitive biases. (logical, but I also don't think it's true from what I know...)

2. AI does improve productivity, but only if you find your own workflow and what tasks it's good for, and many companies try to shoehorn it into things which just don't work for it.

3. AI does improve productivity, but people aren't incentivised to improve their productivity because they don't see returns from it. Hence, they just use it to work less and have the same output.

4. The previous one but instead of working less, they work at a more leisurely pace.

5. AI doesn't improve productivity, people just feel it's more productive because it requires less cognitive effort to use than actually doing the task.

Any of these is plausible, yet they have massively different underlying explanations... studies don't really show why that's the case. I personally think it's mostly 2 and 3, but it could really be any of these.

mlinhares•4mo ago
Why not all? I've seen them all play out. There are also the people downstream of AI slop who feel less productive because now they have to clean up the shit other people produced.
Pannoniae•4mo ago
You're right, it kinda depends on the situation itself! And the downstream effects. Although I'd argue that the one you're talking about isn't really caused by AI itself; that's squarely an "I can't say no to the slop because they'll take my head off" problem. In healthy places, you would just say "hell no, I'm not merging slop", just as you have previously said "no, I'm not merging shit copy-pasted from Stack Overflow".
welshwelsh•4mo ago
I think it's 5.

I was very impressed when I first started using AI tools. Felt like I could get so much more done.

A couple of embarrassing production incidents later, I no longer feel that way. I always tell myself that I will check the AI's output carefully, but then end up making mistakes that wouldn't have happened if I had written the code myself.

enobrev•4mo ago
This is what slows me down most. The initial implementation of a well-defined task is almost always quite fast. But then it's a balance of either...

* Checking it closely myself, which sometimes takes just as long as it would have taken me to implement it in the first place, with just about as much cognitive load, since I now have to understand something I didn't write

* OR automating the checking by pouring on more AI, which takes just as long or longer than checking it closely myself. Especially in cases where suddenly 1/3 of the automated tests are failing and it either needs to find the underlying system it broke or iterate through all the tests and fix them.

Doing this iteratively has made the overall process for an app I'm trying to implement 100% using LLMs take at least 3x longer than if I had built it myself. That said, it's unclear I would have kept building this app without these tools. The process has kept me in the game - so there's definitely some value there that offsets the longer implementation time.

ACCount37•4mo ago
"People use AI to do the same tasks with less effort" maps onto what we've seen with other types of workplace automation - like Excel formulas or VBA scripts.

Why report to your boss that you managed to get a script to do 80% of your work, when you can just use that script quietly, and get 100% of your wage with 20% of the effort?

jdiff•4mo ago
That aligns well with past ideas, but it doesn't align with the studies that have been performed, where there aren't any of the conflicting priorities you mention.
pydry•4mo ago
>1. AI doesn't improve productivity and people just have cognitive biases. (logical, but I also don't think it's true from what I know...)

It is, from what I've seen. It has the same visible effect on devs as a slot machine giving out coins when it spits out something correct. Their faces light up with delight when it finally nails something.

This would explain the study that showed a 20% decline in actual productivity where people "felt" 20% more productive.

rsynnott•4mo ago
(1) seems very plausible, if only because that is what happens with ~everything which promises to improve productivity. People are really bad at self-evaluating how productive they are, and productivity is really pretty hard to externally measure.
HardCodedBias•4mo ago
(3) and (4) are likely true.

In theory competition is supposed to address this.

However, our evaluation processes generally occur on human and predictable timelines, which is quite slow compared to this impulse function.

There was a theory that inter-firm competition could speed this clock up, but that doesn't seem plausible currently.

Almost certainly AI will be used, extensively, for reviews going forward. Perhaps that will accelerate the clock rate.

DenisM•4mo ago
6. It’s now easier to get something off the ground but structural debt accumulates invisibly. The inevitable cleanup operation happens outside of the initial assessed productivity window. If you expand the window across time and team boundaries the measured productivity reverts to the mean.

This option is insidious in that not only are the people initially asked about the effect oblivious to it, it is also very beneficial for them to deny the outcome altogether. Individual integrity may or may not overcome this.

fritzo•4mo ago
2,3,4. While my agent refactors code, I do housework: fold laundry, wash dishes, stack firewood, prep food, paint the deck. I love this new life of offering occasional advice, then walking around and using my hands.
thinkmassive•4mo ago
What's the difference between 1 & 5?

I've personally witnessed every one of these, but those two seem like different ways to say the same thing. I would fully agree if one of them specified a negative impact to productivity, and the other was net neutral but artificially felt like a gain.

rsynnott•4mo ago
This seems to be a poll of _users_. "Do people think it has improved _their_ productivity?" is a very different question to "Has it empirically improved aggregate productivity of a team/company/industry." People think _all_ sorts of snake oil improve their productivity; you can't trust people to self-report on things like this.
Fokamul•4mo ago
2026, year of cybersecurity. Baby, let's goo :D
righthand•4mo ago
> AI adoption among software development professionals has surged to 90%

I am proudly part of the 10%!

riffic•4mo ago
DORA stands for "DevOps Research and Assessment" in case anyone was curious.

https://en.wikipedia.org/wiki/DevOps_Research_and_Assessment

mormegil•4mo ago
I was confused, since DORA is also the EU Digital Operational Resilience Act.
riffic•4mo ago
That's why it's always worth expanding acronyms in my opinion.
nlunbeck•4mo ago
DORA has recently been moving toward its own sphere outside of DevOps, which is why the acronym isn't usually expanded. So many of the core principles of DevOps (communication, collaboration, working across teams, etc.) have impact beyond the DevOps discipline. DORA has been venturing into platform, DevEx, AI, etc.

From last year's DORA report:

"We are committed to the fundamental principles that have always been a part of the DevOps movement: culture, collaboration, automation, learning, and using technology to achieve business goals. Our community and research benefit from the perspectives of diverse roles, including people who might not associate with the "DevOps" label. You should expect to see the term "DevOps" moving out of the spotlight."

dionian•4mo ago
so the whole thing is about AI?
jdiff•4mo ago
...The article titled "How are developers using AI?" tucked behind a link labeled "State of AI-assisted software development"?

Yes, it's about AI. I'm interested to know what you were expecting. Was it titled or labeled differently 11 hours ago?

nadis•4mo ago
If I recall correctly, last year's DORA report was the first to include AI-specific sections. I don't mind it being about AI, but it's potentially a bit of a shift for DORA to be explicit in that focus vs. it being one section of a broader state-of-development report.

Although if the adoption of AI is as high among developers as the DORA report found, then perhaps those are effectively the same thing nowadays.

philipwhiuk•4mo ago
What the heck is that "DORA AI Capabilities Model" diagram trying to show?
kemayo•4mo ago
I'm curious what their sample set for this survey was, because "90% of software developers use AI, at a median of 2 hours a day" is more than I'd have expected.

(But maybe I'm out of touch!)

karakot•4mo ago
well, just assume it's an IDE with 'smarter' autosuggest.
kemayo•4mo ago
That's fair -- the vibe of the post was making me think of the more "Claude, write a function to do X" style of development, but a bunch of people answering the survey with "oh yeah, Xcode added that new autocomplete didn't it?" would do a lot to get us to that kind of number.
philipwhiuk•4mo ago
I’ve always assumed that’s the point of these things. Ask a broad question that will allow you to write a puffy blogpost that backs your conclusion and then write it in a way that pushes your tools.

The amount of free training coming out on AI shows just how keen they are to push adoption to meet their targets.

Eventually this training will no longer be free as they pivot to profit.

xgbi•4mo ago
Rant mode on.

For the second time this week, I spent 45 min this morning reviewing a merge request where the guy had no idea what he did, didn't test, and let the LLM hallucinate a very bad solution to a simple problem.

He just had to read the previous commit, which introduced the bug, and think about it for 1 min.

We are creating young people who have a very limited attention span, have no incentive to think about things, and have very pleasing metrics on the DORA scale. When asked what their code is doing, they just don't know. They can't even explain the choices they made.

Honestly, I think AI is just a very, very sharp knife. We're going to regret this just like we regretted the mass offshoring in the 2000s.

rhubarbtree•4mo ago
Yes, we created them with social media. Lots of people on this site did that by working for the social media companies.

AI usage like that is a symptom, not the problem.

SamuelAdams•4mo ago
> We are creating young people who have a very limited attention span, have no incentive to think about things, and have very pleasing metrics on the DORA scale. When asked what their code is doing, they just don't know. They can't even explain the choices they made.

This has nothing to do with AI, and everything to do with a bad hire. If the developer is that bad with code, how did they get hired in the first place? If AI is making them lazier, and they refuse to improve, maybe they ought to be replaced by a better developer?

Archelaos•4mo ago
Why did you spend 45 min reviewing it instead of outright rejecting it? (Honest question.)
GuardianCaveman•4mo ago
He didn’t read it first either apparently
xgbi•4mo ago
Cause the codebase wasn't originally in my scope and I had to review it in an emergency due to a regression in production. I took the time to understand the issue at hand and why the code had to change.

To be clear, the guy moved a Docker image back from being non-root (user 1000) to using a root user and `exec su`-ing into the user after doing some root things in the entrypoint. The only issue is that, looking at the previous commit, you could see that the K8s deployment using this image had wrongly changed the userId to 1000 instead of 1001.

But since the coding guy didn't take the time for even a cursory look at why working things had stopped working, he asked the LLM "I need to change the owner of some files so that they are 1001" and the LLM happily obliged in the most convoluted way possible (about 100 lines of code change).

The actual fix I suggested was:

      securityContext:
    -   runAsUser: 1000
    +   runAsUser: 1001
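      # the previous commit had wrongly set 1000; no entrypoint chown needed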
Archelaos•4mo ago
Thank you for your explanation. I wondered what might motivate someone to devote so much time to something like this. An emergency due to a regression in production is, of course, a valid reason. And also thank you for sharing the details. It brought a sarcastic smile to my face.
dawnerd•4mo ago
I've just started immediately rejecting AI pull requests. I don't have time for that.

There's going to be a massive opportunity for agencies that are skilled enough to come in and fix all of this nonsense when companies realize what they've invested in.

kemayo•4mo ago
Almost worse are AI bug reports. I've gotten a few of them on GitHub projects, where someone clearly pasted an error message into ChatGPT and asked it to write a bug report... and they're incoherent.
fluoridation•4mo ago
Some are using them to hunt bug bounties too. The curl developer has complained about dealing with a deluge of bullshit reports that contain no substance. I watched a video the other day that demonstrated an example: a report of a buffer overflow. TL;DR: code was generated by some means that included the libcurl header and called strlen() on a buffer with no null terminator, and that's all it did. It triggered ASan, and a report was generated from that, talking about how a remote website could overflow a buffer in the client's cookies using a crafted response. Mind you, the code didn't even call into libcurl once.
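The whole "proof of concept" boiled down to something like this (my reconstruction from the video, not the literal report):

    #include <curl/curl.h>   /* included, but never called */
    #include <string.h>
    #include <stdio.h>

    int main(void)
    {
        char cookie[8] = {'s','e','s','s','i','o','n','='};  /* no '\0' */
        printf("%zu\n", strlen(cookie));  /* reads past the buffer; ASan
                                             fires, libcurl never involved */
        return 0;
    }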
JLO64•4mo ago
I'm not surprised to see reports like this for open source projects, where the bar for contributing is relatively low, but I am surprised to see it in the workplace. You'd imagine that devs like that would be filtered out via the hiring process...

I'm a coding tutor, and the most frustrating part of my job is when my students use LLM-generated code. They have no clue what the code does (or even what libraries they're using) and just care about the pretty output. When I tried asking one of them questions about the code, he responded verbatim "I dunno" and continued prompting ChatGPT (I ditched that student afterward). Something like Warp, where the expectation is to not even interact with the terminal, is equally bad as far as I'm concerned, since students won't have any incentive to understand what's under the hood of their GUIs.

To be clear, I don't mind people using LLMs to code (I use them to code my SaaS project); what I do mind is them not even trying to understand wtf is on their screen. This new breed of vibe coders is going to be close to useless in real-world programming jobs, which, combined with the push targeted at kids that "coding is the future", is going to result in a bunch of below-mediocre devs both flooding the market and struggling to find employment.

xgbi•4mo ago
Same, I use LLMs to figure out the correct options to pass to the az or AWS CLI, and other low-key things. I still code on my own.

But our management has drunk the Kool-Aid and now obliges everybody to use Copilot or other LLM assistants.

saulpw•4mo ago
> You'd imagine that devs like that would be filtered out via the hiring process...

...except when the C-suite is pressuring the entire org to use AI tools. Then these people are blessed as the next generation of coders.

driverdan•4mo ago
> We are creating young people that have a very limited attention span

This isn't about age. I'm in my 40s and my attention span seems to have gotten worse. I don't use much social media anymore either. I see it in other people too, regardless of age.

saulpw•4mo ago
Same. What do you think it's about? Future shock? Smartphone use (separate from social media)? Singularity overwhelm? Long Covid?
akomtu•4mo ago
When neuralink becomes usable, the same hordes of people will rush to install the AI plugin so it can relieve their brains of putting in any effort. The rest will be given a difficult choice: do the same or become unemployable in the new AI economy.
bluefirebrand•4mo ago
I can't wait until people are writing malware that targets neuralink users with brain death

Cyberpunk future here we come baby

tmaly•4mo ago
there is a temptation to fight AI slop with AI slop
signatoremo•4mo ago
Your rant is misplaced. It should be aimed at hiring (candidate screening), at training (getting junior developers ready for their jobs), at engineering (code review and testing), and so on.

If anything, AI helps expose shortcomings of companies. The strong ones will fix them. The weak ones will languish.

jdiff•4mo ago
Assuming you're right, I don't believe the effect will be at all dramatic. The vast majority of businesses are not in breakneck, life-or-death, do-or-die competition. The vast majority of businesses do quite a lot of languishing in a variety of areas, and yet they keep their clients and customers and even continue to grow despite not just languishing, but solid leaps backwards and even direct shots to the foot.

How do you propose that AI will do what you suggest, exposing the shortcomings of companies? Right now, where it's being implemented, it's largely by dictate from above, with little but FOMO driving it and no cohesive direction to guide its use.

cloverich•4mo ago
It's puzzling to me that people are still debating productivity after it's been possible to quantify it for a while now.

My (merged) PR rate is up about 3x since I started using Claude Code over the course of a few months. I correspondingly feel more productive, and feel that I have a good grasp of what it can and cannot do. I definitely see some people use it wrong. I also see it fail on some tasks I'd expect it to succeed at, such as abstracting a singleton in an iOS app I am tinkering with, which suggests it's not merely operator error but also that its skill is uneven depending on task, ecosystem, and language.

I am curious, for those who use it regularly: have you measured your actual commit rates? That's ofc still not the same as measuring long-term valuable output, but we're still a ways off from being able to determine that imho.

surgical_fire•4mo ago
Commit rate is a bad metric. It varies depending on the scope and complexity of what I am doing, and on the size of individual commits.

I can dramatically increase my number of commits by breaking them up into very small chunks.

Typically when I am using AI I tend to reduce the scope of a commit a lot, to make it more focused and easier to handle.

dbs•4mo ago
No need for evidence of net benefits to get mass adoption. We have mass adoption of digital touchpads in cars despite evidence that they are not safe. We have widespread adoption of open-plan offices despite evidence that they do not increase productivity.
alok-g•4mo ago
>> Our research this year also found that AI can act as a "mirror and a multiplier.” In cohesive organizations, AI boosts efficiency. In fragmented ones, it highlights weaknesses.

This is interesting -- It's helping in some cases and possibly worsening in some others. Does anyone have the details? (I haven't looked into the report as yet.) Thanks.