What do you mean when you say that you use LLMs for *Code Scaffolding*?
The AI note taker sounds genuinely useful, but beyond that he never discusses the actual techniques he used to take a side project from one week of implementation time down to one day.
That's par for the course, honestly. News-cycle-driven anti-big-tech sentiment is weak fuel for a lifelong commitment. Something new was going to come along.
I am always happy for anyone who felt stuck on their side projects and no longer does, though.
>I’ve settled into an uneasy position: AI for work where the productivity gain justifies the privacy cost, strict boundaries everywhere else. It’s not philosophically clean. It’s just honest.
> I’ve spent the past year moving away from surveillance platforms... And yet I willingly feed more context into AI tools each day than Google ever passively collected from me. It’s a contradiction I haven’t resolved. The productivity gains are real enough that I’m not willing to give them up, but the privacy cost is real too, and I notice it.
The experience in both my personal and social circles has been that these tools are spotty at best. They often miss important things, overemphasize the wrong things, etc. At a surface level they look good, but if you actually scrutinize them they fall apart.
This is true for a huge amount of AI output in my experience.
This is overwhelmingly true for AI generated code in my experience.
FWIW it makes me highly discount the perspectives of internet commenters who argue that LLMs generate "better than human" or even "mostly working" code.
I used to debate with people about this, but it didn’t really change anything. Now, I just shrug and continue on with my work and, if someone asks, I help them use AI better.
My main worry now is when the AI bubble is going to burst, and what’s affordable now becomes unaffordable.
If you were unproductive, it allows you to be more "productive" while stalling or reversing your learning and growth.
Of course, person number 2's newfound "productivity" comes at the expense of leeching productivity away from the experienced and productive people by overloading them with reviewing and validating their non-deterministic generated spaghetti.
It amazes people who think pumping out code is the hard part of a project, when in fact that's the easiest part...
We've apparently collectively forgotten that lines of code is one of the worst metrics for measuring productivity.
The only question will be whether or not it gradually develops further from my assistant to my controller and then ... its own HR firing department.
Code was always a limiting factor. It's why we built large companies.
Now we can do more with fewer engineers. This will enable small teams and small startups to be even more nimble.
Was code typically a limiting factor? It doesn't seem to have been in the companies I've worked for.
LLMs allow us to generate new code much more quickly than before, but reviewing that code (alongside other institutional issues) remains a bottleneck.
AI can review my code.
LOL, good one
I've worked in multiple start-ups and more mature companies; they always slow down because producing code is easier than building a product. More code is only better when quality hardly matters, which is basically never.
I've watched teams go from deploying weekly to deploying 5x/day after adopting AI coding tools. Their velocity metrics looked incredible. Their incident rate also tripled. Not because the code was worse per se, but because they were changing more things faster than their observability and testing could keep up with.
The bottleneck was never typing speed. It was always understanding -- understanding the system, understanding the user, understanding what "correct" even means in a given context. AI makes the typing-equivalent part nearly free, which just exposes that the hard parts were always the hard parts.
The teams I've seen get the most out of AI coding tools are the ones that used the time savings to invest more in understanding, not to ship more features. More time with users, more time reading production logs, more time thinking about edge cases. The ones that just shipped faster ended up spending the saved time on incident response instead.
This is also the problem with having "conversations" with AI boosters.
These people have been convinced of a world view that devalues *understanding*. Of course they aren't interested in *understanding* what you have to say to them.
This has been a relentless goal of the industry for my entire 40 year career.
> At a point you're more work for yourself/organization because unless you get everything perfect the first time you're creating more work than you're resolving.
Nothing is correct the first time (or rarely is). Accelerating the build, test, re-evaluate loop is a good thing.
There IS experimental evidence on this, and anyone's anecdotal opinion is instantly blown to smithereens by the fact that this was tested: producing code faster is provably better.
I can't claim that AI has no benefit to our organization, but I do think that as my career has matured, I find myself spending more time thinking about how code changes will affect the system as a whole, and less time doing the actual coding.
In that study they found that pretty much everyone was using AI all the time, but they were just using their personal accounts rather than the company provided tools (hence the failures)
In light of this, I'd say there is a very good chance that people are offloading their work on AI, and then taking that saved time for themselves i.e. "I can finish the job report in 30 minutes rather than 3 hours now, so by 9:30 I'm done with work until after lunch."
The end result of this will be either layoffs to consolidate work, or blocking of non-company-monitored AI to ensure they can locate those now-empty time slots.
Honest question: Do you actually read any of these notes? I think there is a fundamental flaw with not taking notes. I'm convinced taking notes forces you to properly consider what is being said and you store the information in your brain better that way.
Yes, this is like listening to a guided meditation at 2x speed because it is faster.
Isn't that pretty much the whole selling point of AI coding tools?
Taking notes during meetings isn't about improving understanding, or about "reading" them afterwards.
They're a record of what was discussed and decided, with any important facts that came up. They're a reference for when you can't remember, two weeks later, if the decision was A and B but not C, or A and C but not B.
Or when someone else delivers the wrong thing because they claim that's what the meeting decided on, and you can go back and find the notes that say otherwise.
I probably only need to find something in meeting notes later once out of every twenty meetings. But those times wind up being so critically important, it's why you take notes in the first place.
Still, I think it's better to discuss "action points" in that case and give a clear owner to each point. That always helps me understand who's accountable and which actions actually need follow-up.
Notes do. Ideally there is a meeting owner who produces official notes and emails them to everyone, but frequently that never happens. And when it does happen, sometimes they're wrong and you need to correct them.
Which is why you need your own meeting notes. Plus, like I said, there are facts that come up that you want to document as well, that aren't part of the action items, but have value.
I think the person you're replying to is suggesting that the shared place for recording these things in a medium-large software department would be in project tracking software like Jira or Github Projects.
The kind of stuff stored in Jira is a very specific subcategory of all the types of things that get mentioned and decided in meetings. It doesn't cover all of it, not even close. And the person putting the information in might also get part of it wrong, that happens surprisingly frequently. It's not a substitute for personal meeting notes.
Does anyone know of a plugin for this?
Like, a history buff could just tell the LLM "quiz me on the Taiping Rebellion, who what where when and why."
The LLM then enters this instruction into an API that handles the spaced repetition data and algorithms.
The LLM could then poll that API and quiz you daily.
Actually knowing all this stuff sounds so much better than having a bunch of notes in a fancy graph.
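I'm not aware of a plugin, but the scheduling half of that API is tiny. Here's a minimal sketch in Go of the classic SM-2 update rule such an API could wrap; the `Card` type, its fields, and the idea of the LLM grading answers on a 0-5 scale are all my assumptions, not any existing product:

```go
package main

import (
	"fmt"
	"time"
)

// Card is one quizzable fact; hypothetical, not from any named plugin.
type Card struct {
	Prompt   string    // e.g. "Who led the Taiping Rebellion?"
	Ease     float64   // SM-2 ease factor, starts at 2.5
	Interval int       // days until the next review
	Reps     int       // consecutive successful reviews
	Due      time.Time // when the card next comes up
}

// Review applies the SM-2 update for a recall grade q in 0..5
// (an LLM grader would have to map a free-text answer onto this scale).
func (c *Card) Review(q int, now time.Time) {
	if q < 3 { // failed recall: restart the schedule
		c.Reps, c.Interval = 0, 1
	} else {
		switch c.Reps {
		case 0:
			c.Interval = 1
		case 1:
			c.Interval = 6
		default:
			c.Interval = int(float64(c.Interval) * c.Ease)
		}
		c.Reps++
	}
	// Standard SM-2 ease adjustment, floored at 1.3.
	c.Ease += 0.1 - float64(5-q)*(0.08+float64(5-q)*0.02)
	if c.Ease < 1.3 {
		c.Ease = 1.3
	}
	c.Due = now.AddDate(0, 0, c.Interval)
}

func main() {
	card := Card{Prompt: "When did the Taiping Rebellion begin?", Ease: 2.5}
	now := time.Now()
	for _, grade := range []int{5, 4, 5} {
		card.Review(grade, now)
		fmt.Printf("grade %d -> next review in %d days\n", grade, card.Interval)
		now = card.Due
	}
}
```

The algorithm isn't the hard part; the hard part would be getting the LLM to grade free-text answers consistently enough that q means anything.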
I suspect it would be less effective to learn from similar-yet-slightly-different LLM-generated content that is regenerated every time you want to study.
Where the value for me comes from is sending them out immediately after the meeting, not archiving them in a vault I never look at. "Here's the summary of what we discussed, and the distilled action items we each agreed to take."
Like the author, I've gone out of my way to avoid hosting my personal stuff with Big Tech providers, but when it comes to work, I give in to whatever we use, because I just don't have capacity to also be IT support for internal technology. It's still uncomfortable, but I have to be honest about what I have time for.
Plus you get a wildly different payoff the more you can take humans completely out of the loop. If it writes the code but humans review, you're still the bottleneck. If it designs and codes and reviews and goes back to designing, and so on, there's no effective speed limit.
Big businesses aren’t going to work that way though. Which is why we shouldn’t be looking to them as thought leaders right now.
Are you sure? It feels like the same exact bullshit to me.
At my company, if you don't use AI, your productivity will be much lower than everyone else's, and that will result in you getting fired. The expectation is 3-4 PRs a day per person.
Ah shit you're probably right.
Are there any concentration camps I can sign up for now that I'm useless to the economy?
I feel like this applies to many of you.
Great take, thanks for sharing this article!
AI isn’t a silver bullet. It takes many iterations to get right. Yes, there is a lot of on-the-surface-it-looks-correct-so-ship-it stuff going on. I cringe when someone says “Well AI says..”
I don’t care what AI says! Unless you have done the research yourself and applied your own critical thinking then don’t send me that slop!
That is to say, there are some really good LLMs out there. I started using Claude and it is better for code than ChatGPT. But, you must understand and appreciate the code before you push it.
Is this productivity or paper pushing?
Productivity in large organizations has never been, and can never be, purely the legible work that gets written in Jira tickets, documented, and expressed clearly; it is sustained by an illegible network of relationships between workers and by unwritten knowledge and practices. AI can only consume work that is legible, but as more work gets pushed into that realm, the illegible relationships and expertise become fragmented and atrophy, which puts backpressure on the productivity of the system as a whole. And having read said book, my guess is that attempting to impose perfect legibility for the sake of AI tooling will ultimately prove disastrous.
The article mentions that the survey is wrong because the productivity gains do not show up in the metrics, etc. But what about your personal metrics? What projects did you ship, how many per week, what was the total amount of minutes saved per week, how did you use those minutes instead?
Otherwise it's just productivity theater.
Most people never use an LLM assistant because their lives aren't complicated enough to require a dedicated 24x7 assistant.
The points being made are fine, I think, but look, if it's faster for you to generate than it is for us to read, I think this qualifies as denial-of-service-lite.
> “AI is everywhere except in the incoming macroeconomic data,” Apollo chief economist Torsten Slok wrote in a recent blog post, invoking Solow’s observation from nearly 40 years ago. “Today, you don’t see AI in the employment data, productivity data, or inflation data.”
So I don't feel like TFA is necessarily a rebuttal to this. The proof would be in the pudding.
So I guess he’s making the case that the tools are good… the employees are just holding it wrong.
If you take 100 people, not all of them will have the intellectual curiosity, enthusiasm, and flexibility to turn their ChatGPT license into productivity gains. No amount of training will overcome a fundamental lack of curiosity and willingness to experiment.
And in very corporate environments there are lots of people like that, who have thrived just fine so far because everything is written down in a step-by-step policy, etc.
Shorthand notation exists, and it's more than possible to develop your own. I'd trust an OBS recording running in the background over some AI slop that has some chance of micro-hallucinating what it's hearing. It also sounds like a skill issue that the author can't control the pace of his own meetings to the point where taking good notes is seemingly impossible.
The author's AI use cases seem like a band-aid to cover bigger problems. Let's not even get into the part of the blog post where the author has started delegating internal thinking and reflection to conversations with a LLM.
> Meeting notes are the obvious one. Before Granola, I’d either scribble while half-listening or pay attention and try to reconstruct things afterwards from memory. Both were bad. Now the transcript happens in the background, a summary lands in my Obsidian vault automatically, and I can actually be present in the conversation. That’s 20 minutes a day I got back, every day, without thinking about it.
Yikes. So, 1) meetings at your company suck. In general, you should be engaged and take short, summary notes and todos while you're there; no need to have a transcript or AI summary. Talk to your manager about getting meetings right. 2) "without thinking about it" might not be the best phraseology in this overall context. :)
philipwhiuk•1h ago
What products? This blog post is long on vibes and short on evidence.
> The actual gains are granular and personal, which makes them hard to count and easy to dismiss.
It also means the trillion dollar valuations might be bunk?
co_king_5•1h ago
> It also means the trillion dollar valuations might be bunk?
Yes, unless the selling point is the institutional and social instability you can create by handing LLMs to technically incapable users and telling them they can write code now.
empath75•1h ago
I think this is an uninteresting question. Almost every company is putting AI produced code into their products now and has been for years. Whether it's entirely vibe-coded or not is beside the point.
I'm working on 4 kubernetes operators that we use internally at work in production currently. 3 of them were handcrafted, one of them was vibe coded using the other 3 as a template. Almost all of the work being done on all 4 of them is now done by AI, whether it is copilot or cursor or claude code. Stuff that used to take me days or weeks now takes hours.
Just to give one example -- yesterday I added a whole new custom resource to an operator with some quite complicated logic that touched 8 or 9 different kubernetes resources. It's not a hugely complicated task, and I could have done it in 2-3 days by myself. Claude Code essentially one shotted it in 15 minutes, including tests. It misunderstood some things, it made some judgement calls that I didn't like in terms of spec, it wrote some new code instead of using pre-existing code, but fixing that took another 90 minutes or so, then I was done.
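For readers who haven't written an operator: the skeleton being filled in looks roughly like the controller-runtime sketch below. `ReportJob`, its API group, and every other name here are hypothetical placeholders I've made up, not the internal operator described above:

```go
package main

import (
	"context"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/log"
)

// Hypothetical custom resource; real operators usually use generated typed
// structs, but unstructured objects keep this sketch self-contained.
var reportJobGVK = schema.GroupVersionKind{
	Group: "example.internal", Version: "v1alpha1", Kind: "ReportJob",
}

type ReportJobReconciler struct {
	client.Client
}

// Reconcile runs whenever a ReportJob (or anything it owns) changes; the
// "complicated logic that touches 8 or 9 resources" would live here.
func (r *ReportJobReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	obj := &unstructured.Unstructured{}
	obj.SetGroupVersionKind(reportJobGVK)
	if err := r.Get(ctx, req.NamespacedName, obj); err != nil {
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}
	// Real logic: create or update the Deployments, Services, ConfigMaps,
	// CronJobs, etc. that this resource declares, then report status.
	log.FromContext(ctx).Info("reconciling", "resource", req.NamespacedName)
	return ctrl.Result{}, nil
}

func main() {
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{})
	if err != nil {
		panic(err)
	}
	obj := &unstructured.Unstructured{}
	obj.SetGroupVersionKind(reportJobGVK)
	if err := ctrl.NewControllerManagedBy(mgr).
		For(obj).
		Complete(&ReportJobReconciler{Client: mgr.GetClient()}); err != nil {
		panic(err)
	}
	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		panic(err)
	}
}
```

Everything interesting happens inside Reconcile; the surrounding wiring is boilerplate, which is part of why operators lend themselves to template-driven generation.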
You can put your head in the sand all you want, but the latest versions of the LLMs running in Claude Code are the real deal. They produce code 5-10x faster than an engineer working by themselves, and it's almost always better code than they would have produced, with more documentation and tests, and even better-written PR comments and Jira tickets.
If you want to talk about valuations, consider now that there is a very real conversation about hiring vs spending more on tokens and spending more on tokens almost always wins. Anthropic is going to be absolutely printing money over the next year, and I would not be surprised if they turn a profit in two years.