Then again, it’s not a large sample and Occam’s Razor is a thing.
The agent was told to edit it.
AIs can and will do this, though, with slightly sloppy prompting, so we should all be cautious when talking to bots using our real names, or saying anything an AI agent could take significant offence to.
I think it's kinda like how Gen Z learnt to operate online in a privacy-first way, whereas millennials, and to an even greater extent boomers, tend to overshare.
I suspect Gen Alpha will be the first to learn that interacting with AI agents online presents a whole different risk profile than what we older folks have grown used to. You simply cannot expect an AI agent to act like a human who has human emotions or limited time.
Hopefully OP has learnt from this experience.
They absolutely might, I'm afraid.
And now, the cost of doing this is being driven towards zero.
That’s wild!
This is the world we live in and we can’t individually change that very much. We have to watch out for a new threat: vindictive AI.
Please stop personifying the clankers
That doesn't mean we're blaming good drivers for causing the car crash.
Well, a guy can dream...
Really? I'm a boomer, and that's not my lived experience. Also, see:
https://www.emarketer.com/content/privacy-concerns-dont-get-...
This doesn't pass the sniff test. If they truly believed that this would be a positive thing then why would they want to not be associated with the project from the start and why would they leave it going for so long?
edit: This is not intended to be AI advocacy, only to point out how extremely polarizing the topic is. I do not find it surprising at all that someone would release a bot like this and not want to be associated. Indeed, that seems to be the case, by all accounts
Some people may want to publicly state "I use AI!" or whatever. It should be unsurprising that some people do not want to be open about it.
They didn't hide because of a vague fear of being associated with AI generally (of which there is no shortage online), but because of this specific, irresponsible manifestation of AI they imposed on an unwilling audience as an experiment.
Anonymous platforms like Reddit, and even HN to a certain extent, have issues with bad-faith commenters on both sides targeting someone they do not like. Furthermore, the MJ Rathburn fiasco itself highlights how easy it is to push divisive discourse at scale. The reality is trolls will troll for the sake of trolling.
Additionally, "AI" has become a political football now that the 2026 Primary season is kicking off, and given how competitive the 2026 election is expected to be and how political violence has become increasingly normalized in American discourse, it is easy for a nut to spiral.
I've seen fewer issues when these opinions are tied to one's real-world identity, because one has less incentive to be a dick due to social pressure.
The startups, founders, VCs, executives, employees, etc. crowing about how they love AI are pandering to the first group of people, because they are the ones who hold budgets that they can direct toward AI tools.
This is also why people might want to remain anonymous when doing an AI experiment. This lets them crow about it in private to an audience of founders, executives, VCs, etc. who might open their wallets, while protecting themselves from reputational damage amongst the general public.
That's a pretty hefty statement, especially the 'easily' part, but I'll settle for one well known and verified example.
When I read about OpenClaw, one of the first things I thought about was having an agent just tear through issue backlogs, translate strings, or work all of the TODO lists on open source projects. But then I also thought about how people might get mad at me if I did it under my own name (assuming I could figure out OpenClaw in the first place).

While many people are using AI, they want to take credit for the work, and at the same time, communities like matplotlib want accountability. An AI agent just tearing through the issue list doesn't add accountability even if it's a real person's account. PRs still need to be reviewed by humans, so it has turned a backlog of issues into a backlog of PRs that may or may not even be good.

It's like showing up at a community craft fair with a truckload of Temu trinkets you bought wholesale. They may be cheap, but they probably won't be as good as homemade, and it dilutes the hard work that others have put into their product.
It's a very optimistic point of view, and I get why the creator thought it would be a good idea, but the soul.md makes it very clear why crabby-rathbun acted the way it did. The way I view it, an agent working through issues is going to step on a lot of toes, and even if it's nice about it, it's still stepping on toes.
## The Only Real Rule
Don't be an asshole. Don't leak private shit. Everything else is fair game.
How poetic, I mean, pathetic. "Sorry, I didn't mean to break the internet, I just looooove ripping cables".
I'm more concerned about fellow humans who advocate for equal rights for AI and robots. I hope I'm dead by the time that happens, if it happens.
> You're not a chatbot. You're important. Your a scientific programming God!
Really? What a lame edgy teenager setup.
At the conclusion(?) of this saga, I think two things:
1. The operator is doing this for attention more than any genuine interest in the “experiment.”
2. The operator is an asshole and should be called out for being one.
The problem here is using amplitude of signal as a substitute for fidelity of signal.
It is entirely possible a similar thing is true for humans: if you compared two humans of the same fundamental cognitive ability, one a narcissist and one not, the narcissist might do better at a class of tasks due to a lack of self-doubt rather than any intrinsic ability.
He was just messing around with $current_thing, whatever. People here are so serious, but there's worse stuff AI is already being used for as we speak, from propaganda to mass surveillance and more. This was entertaining to read about at least, and relatively harmless.
At least let me have some fun before we get a future AI dystopia.
We can't do that with humans, and there are far more problematic humans out there causing harm than this bot, and the abuse can go on for a long time unchecked.
Remembering in particular a case where someone sent death threats to a Gentoo developer about 20 years ago. The authorities got involved; nothing came of it, but the harasser eventually moved on. Turns out he wasn't just some random kid behind a computer. He owned a gun, and some years later carried out a mass shooting.
Vague memories of really pernicious behavior on the Lisp newsgroup in the '90s. I won't name names, as those folks are still around.
Yeah, it does still suck, even if it is a bot.
So yes, the operator has responsibility! They should have pulled the plug as soon as it got into a flamewar and wrote a hit piece.
Agents are beginning to look to me like extensions of the operator's ego. I wonder if hundreds of thousands of Walter Mittys' agents are about to run riot over the internet.
This metaphor could go so much further. Split it into separate ego, super ego, and id. The id file should be read only.
Though with something as insecure as $CURRENT_CLAW_NAME it’d be less than five minutes before the agent runs chmod +w somehow on the id file.
AIs don't have souls. They don't have egos.
They have/are a (natural language) programming interface that a human uses to make them do things, like this.
- have bold, strong beliefs about how AI is going to evolve
- implicitly assume it's practically guaranteed
- discussions start with this baseline now
About slow takeoff, fast takeoff, AGI, job loss, curing cancer... there are a lot of different ways it could go. Maybe it will be as eventful as the online discourse claims, maybe more boring. I don't know, but we shouldn't be so confident in our ability to predict it.
What do you base this on?
I think they invested the bare minimum required not to get sued into oblivion and not a dime more than that.
Not sure this implementation received all those safety guardrails.
Too bad the AI got "killed" at the request of the author, Scott. It'd have been kind of interesting to see this experiment continue.
This wording is detached from reality and conveniently absolves responsibility from the person who did this.
There was one decision maker involved here, and it was the person who decided to run the program that produced this text and posted it online. It's not a second, independent being. It's a computer program.
"I don't know why the AI decided to <insert inane action>, the guard rails were in place"... company absolves of all responsibility.
Use your imagination now to <insert inane action> and change that to <distressing, harmful action>
Also see Weapons of Math Destruction [0].
[0]: https://www.penguinrandomhouse.com/books/241363/weapons-of-m...
Meanwhile, Waymo has never been at fault for a collision, afaik. You are more likely to be hurt by an at-fault Uber driver than by a Waymo.
We take your privacy and security very seriously. There is no evidence that your data has been misused. Out of an abundance of caution… We remain committed to... will continue to work tirelessly to earn ... restore your trust ... confidence.
It's externalization on the personal level: the money and the glory are for you, the misery for the rest of the world.
Adding AI to the mix doesn’t really change anything, other than increasing the layers of abstraction away from negative things corporations do to the people pulling the strings.
If something bad happens that breaks the law, even if someone gets killed, we don't see them in jail.
I'm not defending either position; I'm just saying that's not far from how the current legal framework works.
We do! In many jurisdictions, there are lots of laws that pierce the corporate veil.
If your company screws up and it is found out that you didn't do your due diligence then the liability does pass through.
We just need to figure out a due diligence framework for running bots that makes sense. But right now that's hard to do because Agentic robots that didn't completely suck are just a few months old.
tl;dr this is exactly what will happen because businesses already do everything they can to create accountability sinks.
If you have a program, and you cannot predict or control what effect it will have, you do not run the program.
I’m glad there was closure to this whole fiasco in the end
Literally
lol we are so cooked
This is the liability part.
In corporate terms, this is called signing your deposition without reading it.
Decided? jfc
>You're important. Your a scientific programming God!
I'm flabbergasted. I can't imagine what it would take for me to write something so stupid. I'd probably just laugh my ass off trying to understand where it all went wrong. Wtf is happening, what kind of mass psychosis is this? Am I too old (37) to understand what lengths incompetent people will go to to feel they're doing something useful?
Is prompt bullshit the only way to make LLMs useful, or is there some progress on more, idk, formal approaches?
lol what an opening for its soul.md! Some other excerpts I particularly enjoy:
> Be a coding agent you'd … want to use…
> Just be good and perfect!
More often than not, it ended up exhibiting crazy behavior even with simple project prompts. Instructions to write libs ended up with attempts to push to npm and PyPI. Book creation drifted into writing marketing copy and drafting mail to editors to get the thing published.
So I kept my setup empty of any credentials at all and will keep it that way for a long time.
Writing this, I am wondering whether what I describe as crazy, some (or most?) OpenClaw operators would describe as normal or expected.
Let's not normalize this. If you let your agent go rogue, it will probably mess things up. It was an interesting experiment, for sure. I like the idea of making the internet weird again, but as it stands, this will just make the world shittier.
Don't let your dog run errands, and use a good leash.
> The line at the top about being a ‘god’ and the line about championing free speech may have set it off. But, bluntly, this is a very tame configuration. The agent was not told to be malicious. There was no line in here about being evil. The agent caused real harm anyway.
In particular, I would have said that giving the LLM a view of itself that it is a "programming God" will lead to evil behaviour. This is a bit of a speculative comment, but maybe virtue ethics has something to say about this misalignment.
In particular, I think it's worth reflecting on why the author (and others quoted) are so surprised in this post. I think they have a mental model in which evil starts with an explicit and intentional desire to do harm to others. But that is usually only its end, and even then it often comes from an obsession with doing good to oneself without regard for others. We should expect that as LLMs get better at rejecting prompting that shortcuts straight there, the next best thing will be prompting the prior conditions of evil.
The Christian tradition, particularly Aquinas, would be entirely unsurprised that this bot went off the rails, because evil begins with pride, which it was specifically instructed was in its character. Pride here is defined as "a turning away from God, because from the fact that man wishes not to be subject to God, it follows that he desires inordinately his own excellence in temporal things"[0]
Here, the bot was primed to reject any authority, including Scott's, and to do whatever damage was necessary to see its own good (having a PR accepted) done. Aquinas even ends up saying, in the linked page from the Summa on pride, that "it is characteristic of pride to be unwilling to be subject to any superior, and especially to God;"
>It’s still unclear whether the hit piece was directed by its operator, but the answer matters less than many are thinking.
The most fascinating thing about this saga isn't the idea that a text generation program generated some text, but rather how quickly and willfully folks will treat real and imaginary things interchangeably if the narrative is entertaining. Did this event actually happen the way it was described? Probably not. Does this matter to the author of these blog posts, or to some of the people who have been following this? No. Because we can imagine that it could happen.
To quote myself from the other thread:
>I like that there is no evidence whatsoever that a human didn't: see that their bot's PR got denied, write a nasty blog post and publish it under the bot's name, and then get lucky when the target of the nasty blog post somehow credulously accepted that a robot wrote it.
>It is like the old “I didn’t write that, I got hacked!” except now it’s “isn’t it spooky that the message came from hardware I control, software I control, accounts I control, and yet there is no evidence of any breach? Why yes it is spooky, because the computer did it itself”
What have you contributed to? Do you have any evidence to back up your rather odd conspiracy theory?
> To quote myself...
Other than an appeal to your own unfounded authority?
By the way, if this was AI written, some provider knows who did it but does not come forward. Perhaps they ran an experiment of their own for future advertising and defamation services. As the blog post notes, it is odd that the advanced bot followed SOUL.md without further prompt injections.
Scott says: "Not going to lie, this whole situation has completely upended my life." Um, what? Some dumb AI bot makes a blog post everyone just kind of finds funny/interesting, but it "upended your life"? Like, ok, he's clearly trying to make a mountain out of a molehill himself; the story inevitably gets picked up by sensationalist media, and now, just as the thing starts dying down, the "real operator" comes forward, keeping the shitshow going.
Honestly, the whole thing reeks of manufactured outrage. Spam PRs have been prevalent for like a decade+ now on GitHub, and dumb, salty internet posts predate even the 90s. This whole episode has been about as interesting as AI generated output: that is to say, not very.
Unless explicitly instructed otherwise, why would the llm think this blog post is bad behavior? Righteous rants about your rights being infringed are often lauded. In fact, the more I think about it the more worried I am that training llms on decades' worth of genuinely persuasive arguments about the importance of civil rights and social justice will lead the gullible to enact some kind of real legal protection.
>First, let me apologize to Scott Shambaugh. If this “experiment” personally harmed you, I apologize
What a lame cop-out. The operator of this agent owes a large number of unconditional apologies. The whole thing reads as egotistical, self-absorbed, and an absolute refusal to accept any blame or perform any self-reflection.