frontpage.

Creating and Hosting a Static Website on Cloudflare for Free

https://benjaminsmallwood.com/blog/creating-and-hosting-a-static-website-on-cloudflare-for-free/
1•bensmallwood•3m ago•1 comment

"The Stanford scam proves America is becoming a nation of grifters"

https://www.thetimes.com/us/news-today/article/students-stanford-grifters-ivy-league-w2g5z768z
1•cwwc•7m ago•0 comments

Elon Musk on Space GPUs, AI, Optimus, and His Manufacturing Method

https://cheekypint.substack.com/p/elon-musk-on-space-gpus-ai-optimus
2•simonebrunozzi•16m ago•0 comments

X (Twitter) is back with a new X API Pay-Per-Use model

https://developer.x.com/
2•eeko_systems•23m ago•0 comments

Zlob.h: 100% POSIX and glibc compatible globbing lib that is faster and better

https://github.com/dmtrKovalenko/zlob
1•neogoose•26m ago•1 comment

Show HN: Deterministic signal triangulation using a fixed .72% variance constant

https://github.com/mabrucker85-prog/Project_Lance_Core
1•mav5431•26m ago•1 comment

Scientists Discover Levitating Time Crystals You Can Hold, Defy Newton’s 3rd Law

https://phys.org/news/2026-02-scientists-levitating-crystals.html
2•sizzle•26m ago•0 comments

When Michelangelo Met Titian

https://www.wsj.com/arts-culture/books/michelangelo-titian-review-the-renaissances-odd-couple-e34...
1•keiferski•27m ago•0 comments

Solving NYT Pips with DLX

https://github.com/DonoG/NYTPips4Processing
1•impossiblecode•28m ago•1 comment

Baldur's Gate to be turned into TV series – without the game's developers

https://www.bbc.com/news/articles/c24g457y534o
2•vunderba•28m ago•0 comments

Interview with 'Just use a VPS' bro (OpenClaw version) [video]

https://www.youtube.com/watch?v=40SnEd1RWUU
1•dangtony98•34m ago•0 comments

EchoJEPA: Latent Predictive Foundation Model for Echocardiography

https://github.com/bowang-lab/EchoJEPA
1•euvin•42m ago•0 comments

Disabling Go Telemetry

https://go.dev/doc/telemetry
1•1vuio0pswjnm7•43m ago•0 comments

Effective Nihilism

https://www.effectivenihilism.org/
1•abetusk•46m ago•1 comment

The UK government didn't want you to see this report on ecosystem collapse

https://www.theguardian.com/commentisfree/2026/jan/27/uk-government-report-ecosystem-collapse-foi...
3•pabs3•48m ago•0 comments

No 10 blocks report on impact of rainforest collapse on food prices

https://www.thetimes.com/uk/environment/article/no-10-blocks-report-on-impact-of-rainforest-colla...
2•pabs3•49m ago•0 comments

Seedance 2.0 Is Coming

https://seedance-2.app/
1•Jenny249•50m ago•0 comments

Show HN: Fitspire – a simple 5-minute workout app for busy people (iOS)

https://apps.apple.com/us/app/fitspire-5-minute-workout/id6758784938
1•devavinoth12•51m ago•0 comments

Dexterous robotic hands: 2009 – 2014 – 2025

https://old.reddit.com/r/robotics/comments/1qp7z15/dexterous_robotic_hands_2009_2014_2025/
1•gmays•55m ago•0 comments

Interop 2025: A Year of Convergence

https://webkit.org/blog/17808/interop-2025-review/
1•ksec•1h ago•1 comment

JobArena – Human Intuition vs. Artificial Intelligence

https://www.jobarena.ai/
1•84634E1A607A•1h ago•0 comments

Concept Artists Say Generative AI References Only Make Their Jobs Harder

https://thisweekinvideogames.com/feature/concept-artists-in-games-say-generative-ai-references-on...
1•KittenInABox•1h ago•0 comments

Show HN: PaySentry – Open-source control plane for AI agent payments

https://github.com/mkmkkkkk/paysentry
2•mkyang•1h ago•0 comments

Show HN: Moli P2P – An ephemeral, serverless image gallery (Rust and WebRTC)

https://moli-green.is/
2•ShinyaKoyano•1h ago•1 comment

The Crumbling Workflow Moat: Aggregation Theory's Final Chapter

https://twitter.com/nicbstme/status/2019149771706102022
1•SubiculumCode•1h ago•0 comments

Pax Historia – User and AI powered gaming platform

https://www.ycombinator.com/launches/PMu-pax-historia-user-ai-powered-gaming-platform
2•Osiris30•1h ago•0 comments

Show HN: I built a RAG engine to search Singaporean laws

https://github.com/adityaprasad-sudo/Explore-Singapore
3•ambitious_potat•1h ago•4 comments

Scams, Fraud, and Fake Apps: How to Protect Your Money in a Mobile-First Economy

https://blog.afrowallet.co/en_GB/tiers-app/scams-fraud-and-fake-apps-in-africa
1•jonatask•1h ago•0 comments

Porting Doom to My WebAssembly VM

https://irreducible.io/blog/porting-doom-to-wasm/
2•irreducible•1h ago•0 comments

Cognitive Style and Visual Attention in Multimodal Museum Exhibitions

https://www.mdpi.com/2075-5309/15/16/2968
1•rbanffy•1h ago•0 comments

State of AI-assisted software development

https://blog.google/technology/developers/dora-report-2025/
95•meetpateltech•4mo ago

Comments

wiz21c•4mo ago
> This indicates that AI outputs are perceived as useful and valuable by many of this year’s survey respondents, despite a lack of complete trust in them.

Or the respondents have a hard time admitting AI can replace them :-)

I'm a bit cynical, but when I use Claude it is sometimes downright frightening how good it is. Having coded for a lot of years, I'm sometimes a bit scared that my craft can be so easily replaced... Sure, it's not building all my code, it fails, etc., but it's a bit disturbing to see that something you have been trained in for a very long time can be done by a machine... Maybe I'm just feeling a glimpse of what others felt during the industrial revolution :-)

polotics•4mo ago
Well, when I use a power screwdriver I am always impressed by how much more quickly I can finish easy tasks too. I've also occasionally busted a screw or three that I then had to drill out...
surgical_fire•4mo ago
In a report from Google, which is heavily invested in AI becoming the future, I actually expect the respondents to sound more positive about AI than they really are.

Much as, in person, I pretend to think AI is more powerful and inevitable than I actually believe it is. Professionally it makes very little sense to be truthful. Sincerity won't pay the bills.

bluefirebrand•4mo ago
Everyone lying to their bosses about how useful AI is has placed us all in a prisoner's dilemma where we all have to lie or we get replaced

If only people could be genuinely critical without worrying they will be fired

surgical_fire•4mo ago
I agree. I also don't make the rules.

And to be honest, I don't really care. It is a very comfortable position to be in. Allow me to explain:

I genuinely believe AI poses no threat to my employment. The only medium-term threat I see is the very likely economic slowdown in the coming years.

Meanwhile, I am happy to do this silly dance while companies waste money and resources on what I see as a dead-end, wasteful technology.

I am not here to make anything better.

hu3•4mo ago
I also find it great for prompts like:

"this function should do X, spot inconsistencies, potential issues and bugs"

It's eye opening sometimes.

cogman10•4mo ago
As long as you view AI as a sometimes-competent liar, it can be useful.

I've found AI is pretty good at dumb boilerplate stuff. I was able to whip out prototypes, client interfaces, tests, etc pretty fast with AI.

However, when I've asked AI "Identify performance problems or bugs in this code" I find it'll just make up nonsense. Particularly if there aren't problems with the code.

And it makes sense that this is the case. AI has been trained on a mountain of boilerplate and a thimble of performance and bug optimizations.

fluoridation•4mo ago
>AI has been trained on a mountain of boilerplate and a thimble of performance and bug optimizations.

That's not exactly it, I think. If you look through a repository's entire history, the deltas for the bug fixes and optimizations will be there. However, even a human who's not intimately familiar with the code and the problem will have a hard time understanding why the change fixes the bug, even if they understand the bug conceptually. That's because source code encodes neither developer intent, nor specification, nor real design goals. Which was the cause of the bug?

* A developer who understood the problem and its solution, but made a typo or a similar miscommunication between brain and fingers.

* A developer who understood the problem but failed to implement the algorithm that solves it.

* An algorithm was used that doesn't solve the problem.

* The algorithm solves the problem as specified, but the specification is misaligned with the expectations of the users.

* Everything used to be correct, but an environment change made it so the correct solution stopped being correct.

In an ideal world, all of this information could be somehow encoded in the history. In reality this is a huge amount of information that would take a lot of effort to condense. It's not that it wouldn't have value even for real humans, it's just that it would be such a deluge of information that it would be incomprehensible.

pluc•4mo ago
Straight code writing has never been the problem - it's the understanding of said code that is. When you rely on AI, and AI creates something, it might increase productivity immediately but once you need to debug something that uses that piece of code, it will nullify that gain as you have no idea where to look. That's just one aspect of this false equivalency.
bopbopbop7•4mo ago
Or you aren’t as good as you think you are :-)

Almost every person I've worked with that is impressed by AI-generated code has been a low performer who can't spot the simplest bugs in the code. Usually the same developers that blindly copy-pasted from Stack Overflow before.

bitwize•4mo ago
We may see a return to the days when businesses relied on systems analysts, not programmers, to design their information systems—except now, the programming work will be left to the machines.
zwieback•4mo ago
I find AI coding assistants useful when I'm using a new library or language feature I'm not super familiar with.

When I have AI generate code using features I'm very familiar with I can see that it's okay but not premium code.

So it makes sense that I feel more productive but also a little skeptical.

apt-apt-apt-apt•4mo ago
When I see the fabulous images generated by AI, I can't help but wonder how artists feel.

Anyone got a pulse on what the art community thinks?

fluoridation•4mo ago
Generally speaking, they don't like their public posts being scraped to train AIs, and they don't like accounts that post AI output without disclosing it.
pluc•4mo ago
Every study I've read says nobody is seeing productivity gains from AI use. Here's an AI vendor saying the opposite. Funny.
azdle•4mo ago
It's not even claiming that. It's only claiming that people who responded to the survey feel more productive. (Unless you assume that people taking this survey have an objective measure for their own productivity.)

> Significant productivity gains: Over 80% of respondents indicate that AI has enhanced their productivity.

_Feeling_ more productive is in line with the one proper study I've seen.

Foobar8568•4mo ago
Well, I feel more productive, and I am. Now, on coding activities I am not convinced: it basically replaced SO and Google, but at the end of the day I always need and want to check reference material that I may or may not have known existed. Plenty of times, Google couldn't even find it.

So in my case, yes, but not on the activities these sellers are usually claiming.

thebigspacefuck•4mo ago
The METR study showed that even though people feel more productive, they weren't: https://arxiv.org/abs/2507.09089
knes•4mo ago
The METR study is a joke. It surveyed only 16 devs, in the era of Sonnet 3.5.

Can we stop citing this study?

I'm not saying the DORA study is more accurate, but at least it surveyed 5000 developers, globally and more recently (between June 13 and July 21, 2025), which means using the most recent SOTA models.

bopbopbop7•4mo ago
Yea cite the study funded by a company that invested billions into AI instead, that will surely be non biased and accurate.
capnrefsmmat•4mo ago
It didn't "survey" devs. It paid them to complete real tasks while they were randomly assigned to use AI or not, and measured the actual time taken to complete the tasks vs. just the perception. It is much higher quality evidence than a convenience sample of developers who just report their perceptions.
rsynnott•4mo ago
> I'm not saying the DORA study is more accurate, but at least it surveyed 5000 developers, globally and more recently

It's asking a completely different question; it is a survey of people's _perceptions of their own productivity_. That's basically useless; people are notoriously bad at self-evaluating things like that.

Pannoniae•4mo ago
There's a few explanations for this, and it's not necessarily contradictory.

1. AI doesn't improve productivity and people just have cognitive biases. (logical, but I also don't think it's true from what I know...)

2. AI does improve productivity, but only if you find your own workflow and what tasks it's good for, and many companies try to shoehorn it into things which just don't work for it.

3. AI does improve productivity, but people aren't incentivised to improve their productivity because they don't see returns from it. Hence, they just use it to work less and have the same output.

4. The previous one but instead of working less, they work at a more leisurely pace.

5. AI doesn't improve productivity; people just feel more productive because it requires less cognitive effort than actually doing the task.

Any of these is plausible, yet they have massively different underlying explanations... studies don't really show why that's the case. I personally think it's mostly 2 and 3, but it could really be any of these.

mlinhares•4mo ago
Why not all? I've seen them all play out. There's also the people that are downstream of AI slop that feel less productive because now they have to clean up the shit other people produced.
Pannoniae•4mo ago
You're right, it kinda depends on the situation itself! And the downstream effects. Although I'd argue that the one you're talking about isn't really caused by AI itself; that's squarely an "I can't say no to the slop because they'll take my head off" problem. In healthy places, you would just say "hell no, I'm not merging slop", just as you have previously said "no, I'm not merging shit copy-pasted from Stack Overflow".
welshwelsh•4mo ago
I think it's 5.

I was very impressed when I first started using AI tools. Felt like I could get so much more done.

A couple of embarrassing production incidents later, I no longer feel that way. I always tell myself that I will check the AI's output carefully, but then end up making mistakes that wouldn't have happened if I wrote the code myself.

enobrev•4mo ago
This is what slows me down most. The initial implementation of a well defined task is almost always quite fast. But then it's a balance of either...

* Checking it closely myself, which sometimes takes just as long as it would have taken me to implement it in the first place, with just about as much cognitive load, since I now have to understand something I didn't write

* OR automating the checking by pouring on more AI, and that takes just as long or longer than it would have taken me to check it closely myself. Especially in cases where suddenly 1/3 of automated tests are failing and it either needs to find the underlying system it broke or iterate through all the tests and fix them.

Doing this iteratively has made the overall process for an app I'm trying to implement 100% using LLMs take at least 3x longer than if I had built it myself. That said, it's unclear whether I would have kept building this app without these tools. The process has kept me in the game, so there's definitely some value there that offsets the longer implementation time.

ACCount37•4mo ago
"People use AI to do the same tasks with less effort" maps onto what we've seen with other types of workplace automation - like Excel formulas or VBA scripts.

Why report to your boss that you managed to get a script to do 80% of your work, when you can just use that script quietly, and get 100% of your wage with 20% of the effort?

jdiff•4mo ago
That aligns well with past ideas, but it doesn't align with the studies that have been performed, where there aren't any of the conflicting priorities you mention.
pydry•4mo ago
>1. AI doesn't improve productivity and people just have cognitive biases. (logical, but I also don't think it's true from what I know...)

It is from what I've seen. It has the same visible effect on devs as a slot machine giving out coins when it spits out something correct. Their faces light up with delight when it finally nails something.

This would explain the study that showed a 20% decline in actual productivity where people "felt" 20% more productive.

rsynnott•4mo ago
(1) seems very plausible, if only because that is what happens with ~everything which promises to improve productivity. People are really bad at self-evaluating how productive they are, and productivity is really pretty hard to externally measure.
HardCodedBias•4mo ago
(3) and (4) are likely true.

In theory competition is supposed to address this.

However, our evaluation processes generally occur on human and predictable timelines, which is quite slow compared to this impulse function.

There was a theory that inter-firm competition could speed this clock up, but that doesn't seem plausible currently.

Almost certainly AI will be used, extensively, for reviews going forward. Perhaps that will accelerate the clock rate.

DenisM•4mo ago
6. It’s now easier to get something off the ground but structural debt accumulates invisibly. The inevitable cleanup operation happens outside of the initial assessed productivity window. If you expand the window across time and team boundaries the measured productivity reverts to the mean.

This option is insidious in that the people initially asked about the effect are not only oblivious to it; it is also very beneficial for them to deny the outcome altogether. Individual integrity may or may not overcome this.

fritzo•4mo ago
2,3,4. While my agent refactors code, I do housework: fold laundry, wash dishes, stack firewood, prep food, paint the deck. I love this new life of offering occasional advice, then walking around and using my hands.
thinkmassive•4mo ago
What's the difference between 1 & 5?

I've personally witnessed every one of these, but those two seem like different ways to say the same thing. I would fully agree if one of them specified a negative impact to productivity, and the other was net neutral but artificially felt like a gain.

rsynnott•4mo ago
This seems to be a poll of _users_. "Do people think it has improved _their_ productivity?" is a very different question to "Has it empirically improved aggregate productivity of a team/company/industry." People think _all_ sorts of snake oil improve their productivity; you can't trust people to self-report on things like this.
Fokamul•4mo ago
2026, year of cybersecurity. Baby, let's goo :D
righthand•4mo ago
> AI adoption among software development professionals has surged to 90%

I am proudly part of the 10%!

riffic•4mo ago
DORA stands for "DevOps Research and Assessment" in case anyone was curious.

https://en.wikipedia.org/wiki/DevOps_Research_and_Assessment

mormegil•4mo ago
I was confused, since DORA is also the EU Digital Operational Resilience Act.
riffic•4mo ago
That's why it's always worth expanding acronyms in my opinion.
nlunbeck•4mo ago
DORA has recently been moving into its own sphere outside of DevOps, which is why the acronym isn't usually expanded. So many of the core principles of DevOps (communication, collaboration, working across teams, etc.) have impact beyond the DevOps discipline. DORA has been venturing into platform, DevEx, AI, etc.

From last year's DORA report:

"We are committed to the fundamental principles that have always been a part of the DevOps movement: culture, collaboration, automation, learning, and using technology to achieve business goals. Our community and research benefit from the perspectives of diverse roles, including people who might not associate with the "DevOps" label. You should expect to see the term "DevOps" moving out of the spotlight."

dionian•4mo ago
so the whole thing is about AI?
jdiff•4mo ago
...The article titled "How are developers using AI?" tucked behind a link labeled "State of AI-assisted software development"?

Yes, it's about AI. I'm interested to know what you were expecting. Was it titled or labeled differently 11 hours ago?

nadis•4mo ago
If I recall correctly, last year's DORA report was the first to include AI-specific sections. I don't mind it being about AI but it potentially is a bit of a shift for DORA to be more explicit in that focus vs. it being a section of a broader state of development report.

Although if the adoption of AI is as high among developers as the DORA report found, then perhaps those are effectively the same thing nowadays.

philipwhiuk•4mo ago
What the heck is that "DORA AI Capabilities Model" diagram trying to show?
kemayo•4mo ago
I'm curious what their sample set for this survey was, because "90% of software developers use AI, at a median of 2 hours a day" is more than I'd have expected.

(But maybe I'm out of touch!)

karakot•4mo ago
well, just assume it's an IDE with 'smarter' autosuggest.
kemayo•4mo ago
That's fair -- the vibe of the post was making me think of the more "Claude, write a function to do X" style of development, but a bunch of people answering the survey with "oh yeah, Xcode added that new autocomplete didn't it?" would do a lot to get us to that kind of number.
philipwhiuk•4mo ago
I’ve always assumed that’s the point of these things. Ask a broad question that will allow you to write a puffy blogpost that backs your conclusion and then write it in a way that pushes your tools.

The amount of free training coming out on AI shows just how keen they are to push adoption to meet their targets.

Eventually these trainings will no longer be free as they pivot to profit.

xgbi•4mo ago
Rant mode on.

This morning, for the second time this week, I spent 45 minutes reviewing a merge request where the guy had no idea what he did, didn't test, and let the LLM hallucinate a very bad solution to a simple problem.

He just had to read the previous commit, which introduced the bug, and think about it for 1min.

We are creating young people that have a very limited attention span, no incentive to think about things, and very pleasing metrics on the DORA scale. When asked what their code is doing, they just don't know. They can't even explain the choices they made.

Honestly, I think AI is just a very, very sharp knife. We're going to regret this just like we regretted the mass offshoring in the 2000s.

rhubarbtree•4mo ago
Yes, we created them with social media. Lots of people on this site did that by working for the social media companies.

AI usage like that is a symptom not the problem.

SamuelAdams•4mo ago
> We are creating young people that have a very limited attention span, no incentive to think about things, and very pleasing metrics on the DORA scale. When asked what their code is doing, they just don't know. They can't even explain the choices they made.

This has nothing to do with AI, and everything to do with a bad hire. If the developer is that bad with code, how did they get hired in the first place? If AI is making them lazier, and they refuse to improve, maybe they ought to be replaced by a better developer?

Archelaos•4mo ago
Why did you spend 45 min reviewing it instead of outright rejecting it? (Honest question.)
GuardianCaveman•4mo ago
He didn’t read it first either apparently
xgbi•4mo ago
Because the codebase wasn't originally in my scope and I had to review it in an emergency due to a regression in production. I took the time to understand the issue at hand and why the code had to change.

To be clear, the guy moved back a Docker image from being non-root (user 1000), to reusing a root user and `exec su` into the user after doing some root things in the entrypoint. The only issue is that when looking at the previous commit, you could see that the K8S deployment using this image wrongly changed the userId to be 1000 instead of 1001.

But since the coding guy didn't take even a cursory look at why working things had stopped working, he asked the LLM "I need to change the owner of some files so that they are 1001", and the LLM happily obliged in the most convoluted way possible (about 100 lines of code changed).
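The entrypoint it produced did roughly this (a reconstructed sketch, not the literal diff; the user name, paths, and binary are my own placeholders):

    #!/bin/sh
    # Reconstructed sketch of the MR's entrypoint (illustrative only;
    # the real change was ~100 lines). The image starts as root again...
    set -e

    APP_UID=1001
    APP_USER=appuser

    # ...re-creates the app user with the "right" uid on every start
    # (BusyBox adduser syntax assumed)...
    if ! id "$APP_USER" >/dev/null 2>&1; then
        adduser -D -u "$APP_UID" "$APP_USER"
    fi

    # ...recursively chowns everything the app might touch...
    chown -R "$APP_UID:$APP_UID" /app /data

    # ...then drops privileges and execs into the real process.
    exec su -s /bin/sh "$APP_USER" -c 'exec /app/server'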

The actual fix I suggested was:

    securityContext:
    -  runAsUser: 1000
    +  runAsUser: 1001
Archelaos•4mo ago
Thank you for your explanation. I wondered what might motivate someone to devote so much time to something like this. An emergency due to a regression in production is, of course, a valid reason. And also thank you for sharing the details. It brought a sarcastic smile to my face.
dawnerd•4mo ago
I've just started immediately rejecting AI pull requests. I don't have time for that.

There's going to be a massive opportunity for agencies that are skilled enough to come in and fix all of this nonsense when companies realize what they've invested in.

kemayo•4mo ago
Almost worse are AI bug reports. I've gotten a few of them on GitHub projects, where someone clearly pasted an error message into ChatGPT and asked it to write a bug report... and they're incoherent.
fluoridation•4mo ago
Some are using them to hunt bug bounties too. The curl developer has complained about dealing with a deluge of bullshit reports that contain no substance. I watched a video the other day that demonstrated an example: a report of a buffer overflow. TL;DR: code was generated by some means that included the libcurl header and called strlen() on a buffer with no null terminator, and that's all it did. It triggered ASan, and a report was generated from that, talking about how a remote website could overflow a buffer in the client's cookies using a crafted response. Mind you, the code didn't even call into libcurl once.
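A minimal reconstruction of that kind of "PoC" (my sketch of what the video described, not the actual report's code; the buffer contents are made up) is just:

    /* Bogus "curl vulnerability" PoC, reconstructed for illustration.
     * It never calls a single libcurl function. */
    #include <curl/curl.h>  /* included only to make it look curl-related */
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        char cookie[8];
        memcpy(cookie, "deadbeef", 8);  /* fills the buffer; no room for '\0' */

        /* strlen() reads past the end of the unterminated buffer; built
         * with -fsanitize=address, ASan flags a stack-buffer-overflow here.
         * No remote input, no crafted response, no curl involved. */
        printf("%zu\n", strlen(cookie));
        return 0;
    }

The ASan trace from that self-inflicted read then gets dressed up as a remotely triggerable cookie overflow.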
JLO64•4mo ago
I'm not surprised to see reports like this for open source projects where the bar for contributing is relatively low, but am surprised to see it in the workplace. You'd imagine that devs like that would be filtered out via the hiring process...

I'm a coding tutor, and the most frustrating part of my job is when my students use LLM-generated code. They have no clue what the code does (or even what libraries they're using) and just care about the pretty output. When I tried asking one student questions about his code, he responded verbatim "I dunno" and continued prompting ChatGPT (I ditched that student afterward). Something like Warp, where the expectation is to not even interact with the terminal, is equally bad as far as I'm concerned, since students won't have any incentive to understand what's under the hood of their GUIs.

To be clear, I don't mind people using LLMs to code (I use them to code my SaaS project); what I do mind is them not even trying to understand wtf is on their screen. This new breed of vibe coders is going to be close to useless in real-world programming jobs, which, combined with the push targeted at kids that "coding is the future", is going to result in a bunch of below-mediocre devs both flooding the market and struggling to find employment.

xgbi•4mo ago
Same, I use LLMs to figure out the correct options to pass in the AZ or the AWS CLI, or some low-key things. I still code on my own.

But our management has drunk the Kool-Aid and now obliges everybody to use Copilot or other LLM assistants.

saulpw•4mo ago
> You'd imagine that devs like that would be filtered out via the hiring process...

...except when the C-suite is pressuring the entire org to use AI tools. Then these people are blessed as the next generation of coders.

driverdan•4mo ago
> We are creating young people that have a very limited attention span

This isn't about age. I'm in my 40's and my attention span seems to have gotten worse. I don't use much social media anymore either. I see it in other people too regardless of age.

saulpw•4mo ago
Same. What do you think it's about? Future shock? Smartphone use (separate from social media)? Singularity overwhelm? Long Covid?
akomtu•4mo ago
When Neuralink becomes usable, the same hordes of people will rush to install the AI plugin so it can relieve their brains from putting in any effort. The rest will be given a difficult choice: do the same or become unemployable in the new AI economy.
bluefirebrand•4mo ago
I can't wait until people are writing malware that targets neuralink users with brain death

Cyberpunk future here we come baby

tmaly•4mo ago
there is a temptation to fight AI slop with AI slop
signatoremo•4mo ago
Your rant is misplaced. It should be aimed at hiring (candidate screening), at training (getting junior developers ready for their job), at engineering (code review and testing), and so on.

If anything, AI helps expose shortcomings of companies. The strong ones will fix them. The weak ones will languish.

jdiff•4mo ago
Assuming you're right, I don't believe the effect will be at all dramatic. The vast majority of businesses are not in breakneck, life-or-death, do-or-die competition. The vast majority of businesses do quite a lot of languishing in a variety of areas, and yet they keep their clients and customers and even continue to grow despite not just languishing, but solid leaps backwards and even direct shots to the foot.

How do you propose that AI will do what you suggest and expose the shortcomings of companies? Right now, as it's being implemented, it's largely dictates from above, with little but FOMO driving it and no cohesive direction to guide its use.

cloverich•4mo ago
It's puzzling to me that people are still debating productivity when it's been quantifiable for a while now.

My (merged) PR rate is up about 3x since I started using Claude Code a few months ago. I correspondingly feel more productive and feel that I have a good grasp of what it can and cannot do. I definitely see some people use it wrong. I also see it fail on some tasks I'd expect it to succeed at, such as abstracting a singleton in an iOS app I am tinkering with; that suggests it's not merely operator error, but also that its skill is uneven depending on task, ecosystem, and language.

I am curious, for those that use it regularly: have you measured your actual commit rates? That's of course still not the same as measuring long-term valuable output, but we're still a ways off from being able to determine that IMHO.
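(If you want a rough cut of your own numbers, something like this is enough; the author filter and window are placeholders, and raw counts obviously say nothing about the value of each change:)

    # Commits per month by one author over the last six months:
    git log --author="you@example.com" --since="6 months ago" \
            --date=format:'%Y-%m' --pretty='%ad' | sort | uniq -c

    # Merge commits only, as a crude proxy for merged PRs:
    git log --merges --since="6 months ago" --oneline | wc -l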

surgical_fire•4mo ago
Measuring commit rates is a bad metric. It varies depending on the scope and complexity of what I am doing, and the size of individual commits.

I can dramatically increase my number of commits by breaking my work into very small chunks.

Typically, when I am using AI, I tend to reduce the scope of a commit a lot, to make it more focused and easier to handle.

dbs•4mo ago
No need for evidence of net benefits to get mass adoption. We have mass adoption of digital touchpads in cars despite evidence that they are not safe. We have widespread adoption of open-plan offices despite evidence that they do not increase productivity.
alok-g•4mo ago
>> Our research this year also found that AI can act as a "mirror and a multiplier.” In cohesive organizations, AI boosts efficiency. In fragmented ones, it highlights weaknesses.

This is interesting -- it's helping in some cases and possibly making things worse in others. Does anyone have the details? (I haven't looked into the report yet.) Thanks.