At least personally, this was obvious to me years before AI was around. Whenever we had clear data that pointed to an obvious conclusion, I found that it didn't matter if _I_ stated the conclusion, regardless of whether the data was included. I got a lot more leeway by simply presenting the data and letting my boss come to the conclusion himself.
In the first situation the conclusion was now _my_ opinion and everyone's feelings got involved. In the second, the magic conch (usually a spreadsheet) delivered the conclusion, so no feelings were triggered.
[citation needed]
This entire article is just the meaningless vibes of one guy who sells AI stuff.
Bruh either had help, or he's the most trite writer ever.
It's worth noting that much of the frustration stems from expectations.
I don't expect an AI to learn and "update its weights".
I do, however, expect colleagues to learn at a certain rate. A rate that I believe should meet or exceed my company's standards for, uh, human intelligence.
CharlieDigital•1h ago
So in a sense, we are more forgiving of ourselves than anything.
Grimblewald•17m ago
In fact, in my domain, that's almost always the case. LLMs rarely get it right. Getting something done that would take me a day still takes a day with an LLM, only now I don't fully understand what was written, so there's no real value added, just loss.
It sure can be nice for solved problems and boilerplate tho.