When a bug is reported and I’m not sure where it is, pasting the bug report into an LLM and asking it to find the bug has yielded mixed results but ultimately saved me time.
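The loop is basically: dump the report and a suspect file into a prompt and ask for likely culprits. A rough sketch, assuming the OpenAI Python SDK with an OPENAI_API_KEY set; the file names here are placeholders:

    # Feed a bug report plus a suspect source file to an LLM and ask it
    # to point at the likely bug. Assumes the OpenAI Python SDK and an
    # OPENAI_API_KEY in the environment; paths are made up.
    from pathlib import Path
    from openai import OpenAI

    client = OpenAI()

    bug_report = Path("bug_report.txt").read_text()
    suspect = Path("src/session.py").read_text()  # hypothetical file

    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "You are debugging. Identify the most likely "
                        "buggy lines and explain your reasoning."},
            {"role": "user",
             "content": f"Bug report:\n{bug_report}\n\nSource:\n{suspect}"},
        ],
    )
    print(resp.choices[0].message.content)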
I use AI more for reading than writing code.
Wouldn’t you lose a bit of that brain power if you stop making those connections yourself while trying to understand those code sections?
I still have to rely on my own wits to read the most complicated code.
I don’t spend less time reading code. I just read more code.
Really depends on your perspective. For some executives, a "good" use case may be the equivalent of burning goodwill to generate cash: push devs to use AI extensively on everything, realize a short-term productivity bump while their skills atrophy (but haven't atrophied yet), then let the next guy deal with the problem of devs who have fully realized the "negative effects of relying too heavily on AI."
The author grazes something I’ve been thinking about for a while as I watch LLMs evolve along with their uses: will this tool result in more significant work being accomplished, or just more…work in general?
By that I mean it handles small, well-documented projects well but seems to flounder on larger, more meaningful work.
We already have a problem with junior/mid-tier knowledge workers not scoping their efforts effectively and just doing work for work’s sake. Will the ease of completing small but ultimately useless work result in more of this?
Not a jab at LLMs really. More our propensity to miss the forest in our rush to view a tree.
bitmasher9•1d ago
Reading this feels like only getting half of the inside jokes of a friend group.
rage4774•1d ago