I've been able to write things in a week that would literally take me six moths previously.
The critics don't know how much time you can really save, because they're anti-AI: they don't use it, so they don't know. But they're happy to tell you some "facts" about how your newfound AI-assisted productivity is less than you think it is.
Small teams, more frequent releases, solving problems incrementally, working across the solution rather than having to bridge silos, feeling empowered and safe to try things?
If you already had those things, and took them for granted, it may be hard for you to estimate the impact their absence would have had.
Would you still have been able to use AI to build something so much more quickly if your hands were tied by organizational policy?
I'm saying you may already have had five boring factors that laid the groundwork for that boost, but you're discounting them.
If I'm wrong, then tell me the context - were you in a situation where you didn't have those factors in place?
Or for anyone else reading this, are there resources you have used to learn how to get the most out of LLM coding tools?
You wrote a tic-tac-toe game for Android? So what? What does that prove?
Have you seen the Rails blog demo from 15 years ago?
Writing Python or JavaScript? You're in luck! Wrangling some gnarly SQL or complex Rust? Expect a modest boost. Modifying an old Eiffel program? You're on your own.
I work in WinDbg analyzing crash dumps, and I'm trying to get better at it, so I've been trying to get Claude and Gemini to help me get to the bottom of some memory dumps I have lying around. The experience hasn't been great.
I used Claude yesterday because I wanted to modify my gvim setup to remember the window size and position, and after a few tries I finally got what I wanted (it really wanted to use mksession, but that persists way more than window size, which isn't what I wanted).
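For anyone curious, a mksession-free way to do it is to save the geometry on exit and restore it when the GUI starts. Roughly like this (the save-file path and exact structure here are illustrative, not necessarily what I ended up with):

```vim
" write the window geometry to a small file on exit, source it again on GUI startup
let s:geomfile = expand('~/.vim/gvim-geometry.vim')

function! s:SaveGeometry() abort
  call writefile([
        \ 'set lines=' . &lines . ' columns=' . &columns,
        \ 'winpos ' . getwinposx() . ' ' . getwinposy(),
        \ ], s:geomfile)
endfunction

autocmd VimLeavePre * call s:SaveGeometry()
autocmd GUIEnter * if filereadable(s:geomfile) | execute 'source' fnameescape(s:geomfile) | endif
```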
> I've been able to write things in a week that would literally take me six moths previously.
I don't think you get it. For instance: would you have been able to do that if you were on a team that was not empowered, not self-organising, and you were "afraid to fail ... afraid to try – to make calls, to take initiative, to just f-ing do it"? I doubt it.
I'm curious exactly what that thing was that you "[wrote] in a week, that would have literally take[n you] six moths previously."
Also, what species of moth do you use to measure time?
Don't be silly! The passive, hardworking owners are the ones that deserve all of that profit. Those are just the new expectations. He shouldn't be greedy, and should be happy that he just gets to keep his job.
Author hand waves at "organizations not wanting to open a can of worms" when I wanted to examine each squirmy helminth.
The point is that these practices imply other practices and propagate their own culture. It's simple and not new but still unreasonably effective.
Is this true? A more established / larger organization often has higher revenue per employee.
I guess value is subjective but from a pure economic standpoint I think the F100 employee wins?
Nvidia is doubling every year right now, so a smaller team/startup would have to at least match that. Certainly some will, but enough to make the point valid?
I think scope likely plays a big role here.
Like "two pizza team"s of Amazon.
https://chatgpt.com/share/682ddb36-50f4-8004-b54d-3e41a10ab8...
The article is about the effect of these tools on an organization. If your org isn't doing these 5 things and thinks "adding AI" will finally make them more productive than ever... they might see modest gains, but the article claims the gains won't be as big as if they'd adopted these practices.
It's hard to measure which has more impact: changing your management style and organization structure or using AI.
I'm willing to bet they both have some impact. From experience I believe the former has a bigger impact. But I'm not sure it's true industry-wide.
With that in mind we can say that:
> Smaller teams are better value/$ spent
Feedback loop: Fewer communication channels
Cognitive load: A better-defined area of responsibility
Flow state: Tasks are more coherent instead of being split ad infinitum
> More frequent releases accelerate learning what has real value
Feedback loop: self-evident
Cognitive load: Easier to justify dropping features.
Flow state: Alignment with user needs.
> Limiting work in progress – solving one problem at a time – increases delivery throughput
Feedback loop: transparent progress
Cognitive load: self-evident
Flow state: self-evident
> Cross-functional teams experience fewer bottlenecks and blockers than specialised teams
Feedback loop: Easier to address blockers
Cognitive load: No need to resort to inter-team communication.
Flow state: Easy to offload part of the problem to close teammates
> Empowered, self-organising teams spend less time waiting for decisions and more time getting sh*t done
Feedback loop: No need to wait for management decisions and policies
Cognitive load: Experiments are easier when there's no red tape
Flow state: Natural workflow due to the above.
[0] https://cacm.acm.org/practice/devex-what-actually-drives-pro...
This. Juggling 17 tasks is not a badge of honor. It just shows the world that you have neither the focus to prioritize nor the talent to execute them efficiently. Strange flex, bro.
Put another way, nobody cares about your TODO list, everybody cares about what you've actually shipped. So actually ship things, one at a time.
- More frequent releases accelerate learning what has real value [No improvement]
- Limiting work in progress, solving one problem at a time, increases delivery throughput [Continued]
- Cross-functional teams experience fewer bottlenecks and blockers than specialised teams [Confirmed]
- Empowered, self-organising teams spend less time waiting for decisions and more time getting sh*t done [Confirmed]
Additionally, smaller teams (1-3 engineers per project) that are empowered are much happier. A side effect was that time spent on process, tickets, and communication dropped dramatically, while time spent creating and confirming increased.
In a large organization solving your own blockers can be the difference between releasing next week and releasing next quarter. More frequent releases only help in a business where users adopt new features quickly.
apwell23•5h ago
Any decent software developer uses and creates abstractions instead of generating reams of code with AI.
From what I've seen at work, AI is a "game changer" for coding in the worst sense: reams and reams of duplicated code that looks slightly different from other generated code doing similar things. Before AI, people used to stop and create a library; now they just generate shit because it's so easy. AI is the death of software engineering.
righthand•5h ago
My manager probably adds 10+ hours to my week by pushing LLM code at our projects, only to follow up with several merge requests to fix his own work. I just approve whatever he pushes because he isn't interested in actually solving the problem; he's interested in seeing if he can fiddle the solution out of an LLM. Each time, it involves me telling him the answer. His boss is the same way. They're literally dragging the company's efficiency down and proving the efficiency gains are meaningless.
sebstefan•5h ago
> I'd like to output lines where .stack_trace is non empty with JQ
I vaguely remembered that it sometimes has "null" and sometimes has empty strings
Time gained ~60s looking at the doc
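For the record, the filter ends up being something along these lines (a sketch; it assumes JSON-lines input, and the file name is just a placeholder):

```
# keep only records whose .stack_trace is neither null nor an empty string
jq 'select(.stack_trace != null and .stack_trace != "")' app.log
```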
> Here is a Jira ticket's description: ```....``` Please rephrase this more clearly and make the text flow better?
^ Then I picked and chose the improvements
No time gained but quality improved
> Critique this for accuracy: (A long comment about the properties of randomness)
No time gained but quality improved
> How do I make excel prompt me to select the goddamn delimiter when I open a CSV file instead of just picking a random fucking one that never works
Question was filtered due to content policy, because apparently they don't want you to offend the robot
apwell23•5h ago
> real game changers are “chat-with-codebase” or agentic development tools
sebstefan•4h ago
> From what I've seen at work, AI is a "game changer" for coding in the worst sense
RobKohr•5h ago
But this article is on point. All of the things listed are more impactful than LLMs.
ramses0•4h ago
For certain projects I'd used `vscode` (with the vim plugin!), and there are definitely some helpful bits. The biggest helper for me is/was the `F2-rename-symbol` capability. Being able to "rename" safely in the whole file (or function), and across the project, is super-useful.
Working with Cursor, the autocomplete is (often) pretty shockingly good. E.g.: when I go to rename `someVar` to `someOtherVar`, it'll prompt me to `<tab>` and:
In vim, I'd `*` to automatically search for `someVar`, then `cwsomeOtherVar`, (change-word), then `n.n.n.` (next, repeat, etc.)...so my overhead (by keystrokes) is `*` (search), `cw` (change word), (`n.`) next-and-change. Five "vim" characters, and I mentally get to (or have to) review each change place.
In straight `vscode`, I can do `F2-rename` and that'll replace _some_ of the variables (then I still have to rename the log lines, etc.).
With Cursor, I make the `cw...` edit and it's 90%+ accurate in "doing what I probably also want to do" with a single `<tab>` character.
It gets even more intriguing when you do `s/foo/fooSorted/` and it automatically inserts the `.sort()` call, or changes it to call `this.getFooSorted()` or `this.getSorted( foo )` or whatever.
For "cromulent" code, cursor autocomplete is "faster than vim". For people that can't type that good, or even that can't program that good, it's a freaking god-send. Adding in the `Agent...` capabilities (again, for "cromulent" code)... if you're just guiding it along: "Now, add more tests" => "Now 50% more cowbell!" => "Whoops, that section would be more efficient if you cached stuff outside the loop."
Even then, I have to have some empathy with the AI/Agent coding, "Hey... you messed up that part (btw, I probably would have messed up that part the first time through as well...)". We can't hold them to gold standards that we wouldn't meet either, but treating them as "helpful partners" really reduces the mental burden of typing in EVERY SINGLE CHARACTER by yourself.
ratrocket•3h ago
The line in my nvim config that sets up the LSP "rename" is:
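Mine is just a keymap onto `vim.lsp.buf.rename`; the exact binding below is illustrative rather than gospel:

```lua
-- normal-mode mapping for LSP rename; '<leader>rn' is one common choice of key
vim.keymap.set('n', '<leader>rn', vim.lsp.buf.rename, { desc = 'LSP: rename symbol' })
```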
I mention this because, in spite of using vim/neovim for over 20 years, I still learn new things (from HN comments and elsewhere) -- which is part of why I love it. (To your larger point -- I concur, btw.)
ramses0•3h ago
It was super nice to have "jump-to-definition", but the vim plugin in vscode is very nice (missing a few things, but even `<c-w>hjkl` "does the right thing" (!), so they're really trying).
I haven't leaped to nvim (yet?), and the fact that vscode kind of "just works" has prevented me from chasing LSP support or setting up the more "advanced" features, but thanks for sharing!
It'd be fantastic if something similar to that Cursor autocomplete were available in a "real" CLI vim. There are things that really bug me about the "hover suggestions" (e.g. I can't always tell which characters/lines are "real" vs. "suggested", especially with auto-closing double-quote suggestions), but when I occasionally drop to a terminal vim for "accurate" editing, I really do find myself thinking "I should just be able to tab-complete the rest of these edits..." and I don't know how to express that in an appropriate "editor" context.
Maybe lean on like a `vimdiff` representation, where you could `:vsplit $ASSISTANT` and accept suggested diffs? (Hmmmmm....)
closewith•5h ago
Whilst I agree, the mainstream is also where the vast majority of software development occurs. CRUD apps and enterprise workloads, etc.
Saying that current LLMs are only useful for the mainstream is saying that they're incredibly useful.
c0brac0bra•4h ago
Now, I feel like there are probably other workflows out there that I'm ignorant of that could be better, but keeping up feels impossible. Is there a particular approach/tool that you're finding to be really beneficial?
ldjkfkdsjnv•3h ago
1. Build a huge prompt with Repo Prompt (500k+ tokens)
2. Ask Gemini Pro to summarize the task using the above prompt
3. Hand off the coding to Codex (OpenAI)
The above flow replaces a junior developer.
apwell23•2h ago
Can you demo this flow on an open-source issue in a repo like PyTorch?
I am curious to see how many issues this junior developer can close.
ldjkfkdsjnv•23m ago