People these days do everything to avoid actually programming, but they still want to call themselves programmers.
What’s the fireable offense? Does the boss want to stitch those tools together themselves?
If the output is crap, regardless of the tool, that’s a different story, and one we don’t have enough info to evaluate.
It depends how mission critical his brainstorming is for the company. LLMs can brainstorm too.
That means OP’s job may be _safer_, because they are getting higher leverage on their time.
It’s their colleague who’s ignoring AI that I see as higher risk.
AI is a tool for you to create better results, not an opportunity to offload thinking to others (as is so often done now).
Previously, the output of office work was always tightly coupled to the accountability that output implies. Because the output is visible and measurable while accountability isn't, once it became possible to generate plausible-looking professional output, most people assumed that's all there is to the work.
We're about to discover that an LLM can't be chastened or fired for negligence.
Been doing sysadmin since the ’90s. Why bother with AI? It just slows me down. I've already scripted my life with automation. Anything not already automated probably takes me a few minutes, and if it takes longer, I'll build the automation. Shell scripts and Ansible aren't hard.
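To give a rough idea of what I mean, most of it is at the level of this kind of sketch (the hostnames, config file, and service here are just placeholders, not my actual setup):

```
#!/bin/sh
# Sketch of a typical one-off: push a config to a few hosts and reload the
# service. Hostnames, the config file, and the service are placeholders.
set -eu

HOSTS="web1 web2 web3"

for h in $HOSTS; do
    scp nginx.conf "$h:/etc/nginx/nginx.conf"
    ssh "$h" 'nginx -t && systemctl reload nginx'
done
```

Anything I end up running more than a couple of times gets promoted into a proper script or an Ansible playbook.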
Sometimes when I have time to play around I just ask models what stinks in my code or how it could be better with respect to X. It's not always right or productive, but it is fun; you should try it!
And, more importantly, Beavis and Butthead.
You’re the kind of go-getter that has upper management written all over you.
Still, I have this feeling that AI is very close to “doing my work”, yet when I step back I see it may be a rather seductive mirage.
Very unclear. Hard to see with the silicon-colored glasses on.
I keep a list of "rules of engagement" with AI that I try to follow so it doesn't rob me of cognitive engagement with tasks.
Not sure if the Bazel or AI part is worse. :-D I think Bazel.
- reading papers, blogs, and articles, searching Google Scholar, and chatting with Perplexity about them to help find other papers
- writing research proposals based on my reading and previous research
- analysing data; lately this means asking Claude Code to generate a notebook for playing around with the data
- writing code to explore some data or make some model of it; this also involves a lot of Claude Code interaction these days
- meetings, slack, email
- doing paper and proposal reviews, which includes any or all of the above tasks plus writing my opinion in whatever format is expected
- travelling somewhere to meet with colleagues at a conference or their workplace to do some collaboration, which includes any or all of the above plus giving talks
- organising events that bring people together to do any or all of the above
I’m a soft-money research scientist with a part-time position in industry working as a consultant.
I've just taken a week off to help extended family with a project, and it's reminded me what a good job is.
The rest is actual coding (where using AI typically slows me down), design, documentation, handling production incidents, monitoring, etc.
So what do you learn?
I find that it’s easier to write code than to write English statements describing code I want written.
I can’t phone this work in. It has to be creative and also precise.
I know of no way to design useful training experiences using AI. It just comes out as slop.
When I am coding, I use Warp. It often suggests bug fixes, and I do find that these are worth accepting, generally speaking.
And yet, I've realized that a few research and brainstorming sessions with LLMs that I thought were really good and insightful were just the LLM playing "yes, and" improv with me, reinforcing my beliefs regardless of whether I was right or wrong.
I spend 20-30% of my week on administrative paperwork: making sure people are taking their required trainings. Didn't we just do the cybersecurity ones? Yes, we did, but IT got hacked and lost all the records showing that we did, so we need to do it again.
I spend 10-20% of my week trying to write documentation that Security tells me is absolutely required, but that has never gotten me any answers from them on whether they are going to approve any of my applications for deployment. In the last 2 years, I've gotten ONE application deployed, and I had to weaponize my org chart to get it to happen.
That leaves me somewhere between -10% and 20% of the week to get the vast majority of the programming done on our projects. Which I do. If you look at the git log, my name dominates.
I don't use AI to write code because I don't have time to dick around with bad results.
I don't use AI to write any of my documentation or memos. People generally praise my writing for being easy to read. I certainly don't have time to edit AI's shitty writing.
The only time I use AI is when someone from corporate asks me to "generate an AI-first strategy for blah blah blah". I think it's a garbage initiative, so I give them garbage work. It seems to make them happy, and then they go away and I go back to writing all the code by hand. Even then, I don't copy-paste the response; I type it out in full while reading it, just in case anyone asks me any questions later. Despite everyone telling me "typing speed isn't important to a software developer," I type around 100 WPM, so it doesn't take too long. Not blazing fast, but a lot faster than every other developer I know.
So, forgive me if I don't have a lot of sympathy for you. You sound like half the people in my company, claiming AI makes them more productive, yet I can't see anywhere in any hard artifacts where that productivity has occurred.
It was a nonstop game of my IDE’s refactoring features, a bunch of `xargs perl -pi -e 's/foo/bar/;'`, and repeatedly running `cargo check` and `cargo clippy --fix` until it all compiled. It was a 4000+ line change in the end (net 700 lines removed), and it took me all of those 8.5 hours to finish.
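For a sense of the mechanics, the loop looked roughly like this (the file glob and the s/foo/bar/ rename are placeholders, not the real change, which was far bigger than a single substitution):

```
# Rough shape of the manual loop; paths and the rename are placeholders.
# 1. Mass-rename an identifier across the crate's Rust sources.
git ls-files '*.rs' | xargs perl -pi -e 's/\bfoo\b/bar/g'

# 2. Let clippy auto-apply whatever fixes it can on the dirty tree.
cargo clippy --fix --allow-dirty

# 3. Re-check; repeat steps 1-2 (plus IDE refactors) until it compiles clean.
cargo check
```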
Could an AI have done it faster? Who knows. I’ve tried using Cursor with Claude on stuff like this and it tends to take a very long time, makes mistakes, and ends up digging itself further into holes until I clean up after it. With the size of the code base and the long compile times I’m not sure it would have been able to do it.
So yeah, a typical day is basically 70% coding, 20% meetings, and 10% Slack communication. I use AI only to bounce ideas off of, as it seems to do a piss-poor job of maintenance work on a codebase. (I rarely get to write the sort of greenfield code that AI is normally better at.)
mrkeen•5h ago
If I have a question I can just ask ChatGPT, Perplexity, and Gemini.
coldtea•4h ago
Which get their knowledge (training data) on relevant topics from analysts, who increasingly use ChatGPT and the rest to produce those analyses.
Enough loops of this, and analyst writings and ChatGPT responses on market analysis will soon reach “useless bullshit” parity.
bbarnett•4h ago