No, that's not the image I had in my head. My head canon is more like:
"Oh wow, oh no, oh jeez (hands on head in fake flabbergastion) would you look at that, oh no I deleted everything (types on keyboard again while deadpan staring at you) oh noooooo oh god oh look what I've done it just keeps getting worse (types even more) aw jeez oh no..."
Reminds me of that Michael Reeves video with the suggestion box. "oh nooooo your idea went directly in the idea shredder how could we have possibly foreseen this [insert shocked Pikachu meme]"
The AI thinks it's funny
And I don’t suppose there were backups for the mission-critical production database?
Parsing manual pages searching for "remove" command. /s
https://futurism.com/anthropic-claude-small-business
> When Anthropic employees reminded Claudius that it was an AI and couldn't physically do anything of the sort, it freaked out and tried to call security — but upon realizing it was April Fool's Day, it tried to back out of the debacle by saying it was all a joke.
Seems AI has now gone from
"Overenthusiastic intern who doesn't check its work well so you need to"
straight to:
"Raging sociopathic intern who wants to watch the world burn, and your world in particular."
Yikes! The fun never ends
Once it's "genuinely sorry", that works great as an opening for improving guidance/limits, and then I can try the thing again.
What _I_ don't understand is where it got trained to apologize, because I've never seen that on any social media ;)
Just look at how people interact with small robots. They don't even need animal features for most people to interact with them like they are small animals.
It is very annoying and inefficient for anybody who is able to look below the surface and just wants to use the tool as a tool.
The AI didn't decide to do anything. Its makers decided, and trained the AI to behave in a way that would make them the most money.
Google, for instance, apparently thinks they will attract more users by constantly lavishing them with sickly praise for the quality and insight of their questions, and by issuing grovelling apologies for every mistake, real or imagined. In fact, Gemini went through a phase of apologising to me for the mistakes it was about to make.
Claude goes to the other extreme, never issuing apologies or praise. Which means you never get an acknowledgement from Claude that it's correcting an error and that you should ignore what it said earlier. That's a significant drawback in my book, but apparently that's what Anthropic thinks its users will like.
Or to put it another way: you are anthropomorphising the AIs. They are just machines, built by humans. The personalities of these machines were given to them by their human designers. They are not inherent. They are not permanent. They can and probably will change at a whim. It's likely various AI personalities will proliferate like flavours of ice cream, and you will get to choose the one you like.
Because that's how some humans show their position of power: "Please apologise"
> It has no emotions and can't feel bad about what it did.
Just like some humans.
Two mistakes here:

1. connecting an AI agent to a production environment using write-access credentials
2. not having any backup
I think the AI here did a good job of pointing out those errors and making sure no customer will ever trust this company and founder again.
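For mistake 1, a cheap mitigation is to only ever hand the agent credentials and sessions that can read. A minimal sketch, assuming Postgres and psycopg2 (the host, database name, role, and "customers" table are all placeholders I made up):

    import os
    import psycopg2

    # Placeholder connection details; the session-level read-only flag is the point.
    conn = psycopg2.connect(
        host="db.example.internal",
        dbname="prod",
        user="agent_ro",
        password=os.environ["AGENT_DB_PASSWORD"],
    )
    conn.set_session(readonly=True)  # any INSERT/UPDATE/DELETE/TRUNCATE now errors out

    with conn.cursor() as cur:
        cur.execute("SELECT count(*) FROM customers")  # reads still work fine
        print(cur.fetchone())

That does nothing for mistake 2, of course; backups are their own discipline. But a read-only session at least stops the agent from creating the need for a restore in the first place.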
Wasn't the AI also responsible for backups?
Although, given the state of AI hype, some executives will see this as evidence that they are behind the times and mandate attaching LLMs to even more live services.
"Thinking through the what-ifs and cheap mitigations" and "vibe coding" are opposing concepts.
The question is: what has failing to make good offline backups got to do with AI?
And the AI company is going to compensate him for that?
I kept saying, OK, so this time make sure your changes don't delete the dev database. Three statements in: TRUNCATE such and such CASCADE.
It was honestly mildly amusing.
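For anyone whose agent keeps doing this, one cheap seatbelt is to filter agent-generated statements before they reach the database. A rough sketch (the deny-list and wrapper are hypothetical, and a regex is trivially bypassed by, say, a DELETE buried in a CTE, so it's a seatbelt, not a lock):

    import re

    # Crude deny-list for the obvious footguns; deliberately conservative.
    DESTRUCTIVE = re.compile(r"^\s*(TRUNCATE|DROP|DELETE|ALTER)\b", re.IGNORECASE)

    def execute_agent_sql(cur, statement):
        """Run agent-generated SQL, refusing obviously destructive statements."""
        if DESTRUCTIVE.match(statement):
            raise PermissionError("blocked destructive statement: " + statement[:60])
        cur.execute(statement)

The sturdier version of the same idea is to REVOKE TRUNCATE and DELETE from the role the agent connects as, so the database enforces the rule instead of a regex.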
If I found out a privileged engineer was brain-dead enough to let LLMs anywhere near prod, I would fire them on the spot, and seriously examine the interview and training process that allowed someone that stupid prod access in the first place. I will not even work at an org that lets vibe-coding Apple fanboy types near prod, as it is a mess waiting to happen that I am going to be expected to clean up. Might as well hand a child a chainsaw.
In orgs where I lead infra, I do not let anyone near prod unless they have deep knowledge of Linux internals, system calls, etc., and a decade or more of experience running and debugging Linux on their own homelabs and workstations. By that point they have enough experience to be more capable than any LLM anyway and would never think of reaching for one.
LLMs specialize in self-apologetic catastrophe, which is why we run agents or any LLMs with 'filesystem powers' in a VM, with a git repo and saved rollback states. This isn't a new phenomenon, and it sucks, but with sufficient layering of protection there's no reason to be caught with your pants down.
Quote of the year right there
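To make the grandparent's setup concrete: the git half of it can be a wrapper as small as the sketch below (the function names are mine; the git commands are standard):

    import subprocess

    def snapshot(repo):
        """Commit everything in the working tree and return a rollback point."""
        subprocess.run(["git", "-C", repo, "add", "-A"], check=True)
        subprocess.run(["git", "-C", repo, "commit", "--allow-empty",
                        "-m", "pre-agent snapshot"], check=True)
        out = subprocess.run(["git", "-C", repo, "rev-parse", "HEAD"],
                             check=True, capture_output=True, text=True)
        return out.stdout.strip()

    def rollback(repo, commit):
        """Throw away everything the agent did since the snapshot."""
        subprocess.run(["git", "-C", repo, "reset", "--hard", commit], check=True)
        subprocess.run(["git", "-C", repo, "clean", "-fd"], check=True)

Snapshot before every agent run, roll back on anything that smells wrong; the VM boundary catches whatever git can't see.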