Most software engineers I know use some amount of AI assistance in coding.
Pretends is a pretty strong word here. A lot of people actually use it to help them do their work.
In my experience, the place it’s most useful is in areas where you don’t have expertise, as a sort of bolster to your knowledge.
Also in my experience, it tends to be really good at producing outputs that sound extremely convincing to the non-expert but are completely incorrect in the details. And the only way to know whether you got a good or bad answer is to be a subject matter expert… which sort of defeats the purpose.
Even if you "iterate", and eventually arrive at something that's correct, the thing that sticks is the first one, the one that you paid most attention to before you realized it's wrong.
Part of it is that “what good looks like” - from leadership’s perspective - looks different right now, thanks to LLMs. The other part is that knowing isn’t enough in a large org; you have to show.
We are embracing this SO HARD. I need my communication to look like this, and most importantly my team’s communication needs to implicitly show that we are bought in (to accompany the explicit proof, measured in token KPIs - not kidding).
Jesus Christ, has it gotten that bad? I'm out of the loop since I use absolutely zero generative AI tools personally.
That's a line I'd never even consider crossing. Don't pass your work off as someone, or rather something, else's. Take pride in what you do.
Hahah, that's great. This is 2025, man. Corporations abandoned the idea of letting people take pride in what they do long ago. Corporations have no room for slowpokes to take their time to do it correctly. There is only room for ramrod fuckheads who pump out code as fast as possible and then file 400 bugs on that same code after it's in production. And, yes, they knew of the problems when they pushed it to production. Everyone must move faster. Always. Faster tomorrow than today, without exception. The idea that people have the time to even imagine a day when they could CRAFT anything is long gone.
Profits above all. Always.
Speed, speed, speed. Always.
You must always go faster. If you are not going faster then you are a liability.
Bosses pushing for AI don't care about that. They just need to be able to tell their bosses that "AI is improving productivity". It's a KPI that's landed on their desk from senior executives who live in a world of PowerBI dashboards, CEO "fireside chats" and McKinsey hype mongering.
I do use AI, but I make out that it's a bigger part of my workflow than it really is, to appease him.
"Look at these commits and do a review" to find stupid stuff like forgotten exception throws or nulls. Or "Critique this documentation for a different audience"
I don't want to get into the "let the machine write the code" phase because that's the task I enjoy most, and it swaps my effort into review, which I'm bluntly less confident about (having to follow and second-guess an architecture without being "there" while it evolved increases the chances I'll miss stuff).
The stuff that AI demos well at -- "refactor 5,000 lines of code", "build a new client from the ground up" -- is simply not what my team works on; we end up doing things where building a prompt to actually make the change we want -- and only the change we want -- takes longer than writing the code itself. 80% of the time is the debugging and planning.
I'm building something similar and I found that code review with LLMs is really good when:
- You give it specific rules. I built a directory for these because they made the reviewer so much better [1]
- The rules you write are things your team already looks for during review (proper exception handling, ensuring documentation, proper comments, etc.)
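Here's a minimal sketch of what that rules-directory setup might look like. Everything here is hypothetical -- the directory layout, function names, and rule wording are illustrative, not the actual tool from [1] -- but it shows the core idea: each file holds one team convention, and the whole set gets prepended to the review prompt so the model checks for exactly those things.

```python
# Hypothetical sketch: one rule per file in review_rules/, all of them
# folded into the prompt given to the LLM reviewer.
from pathlib import Path


def load_rules(rules_dir: str) -> list[str]:
    """Read every rule file in the directory, sorted for a stable order."""
    return [p.read_text().strip() for p in sorted(Path(rules_dir).glob("*.md"))]


def build_review_prompt(diff: str, rules: list[str]) -> str:
    """Assemble a review prompt that pins the model to the team's rules."""
    rule_block = "\n".join(f"- {r}" for r in rules)
    return (
        "Review this diff. Flag ONLY violations of these rules:\n"
        f"{rule_block}\n\n"
        f"Diff:\n{diff}"
    )


# Example with inline rules instead of files:
rules = [
    "Every public function must have a docstring.",
    "Never swallow exceptions with a bare `except:`.",
]
prompt = build_review_prompt("+ def f(x):\n+     return x * 2", rules)
print(prompt)
```

The payoff is that the reviewer stops free-associating generic style nits and only reports things your team actually enforces, which is what makes point two above work.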
Maybe I should share this article in that chat; then I might seem less alone.
I don't think GenAI as currently included in MS Office is very useful. It can do simple things like writing emails, but in such a stupidly formal way that people instantly know it's not me, because it can't clone my style. It's more work to tell it how to write an email than to write it myself. And I read fast, so I don't need summarisation of emails.
More advanced things like complex Excel modifications, the things I could actually use help with, work badly. It ranges from "there's been a problem, please try again later" to just doing completely stupid stuff. Creating a PowerPoint? Yeah, it can do that, but it's a one-shot thing. I can't say "ok, nice start, but change xyz"; it'll just tell me to do it myself in a passive-aggressive way. I can restart the generation, but it's like rolling the dice, because every time it gives a completely different design. It's cool for demonstrations, completely useless for actual work. Everything else just costs me more time than doing it myself, especially because I have to verify everything to make sure it didn't hallucinate.
The only thing I do see value in is finding things again. Look through my emails and find that dude I spoke to about buying new company phones two years ago, that kinda stuff.
There are some promising new features coming, like the researcher (reasoning engine) and the analyst, and it's slowly starting to become somewhat useful. I'm sure it'll get there.
But right now I don't bother with it, so I just fake it. It's a bit disheartening when everyone wants to go full steam ahead with something that's still so rough around the edges, but everyone is crazy about getting on board this train.
I do have a big AI server at home. While it performs even worse, I don't need to trust big tech to use it, which for me is a must-have for any kind of personal use whatsoever.