An AI agent published a hit piece on me - https://news.ycombinator.com/item?id=46990729 - Feb 2026 (916 comments)
AI agent opens a PR, writes a blog post shaming the maintainer who closes it - https://news.ycombinator.com/item?id=46987559 - Feb 2026 (582 comments)
palmotea•1h ago
> We all need to collectively take a breath and stop repeating this nonsense. A human created this, manages this, and is responsible for this.
I get the point, but there's a risk to this kind of thinking: putting all the responsibility on "the human operator of record" is an easy way to deflect it from other parties, such as the people who built the AI agent system the software engineer ran, the industry leaders hyping AI left and right, and the general zeitgeist egging this kind of shit on.
An AI agent like this that requires constant vigilance from its human operator is too flawed to use.
joshstrange•1h ago
I don't think this is OpenClaw's or OpenAI/Anthropic/etc.'s fault; it's the fault of the human user who kicked it off and then either failed to monitor it or is hiding behind it.
For all we know a human told his OpenClaw instance "Write up a blog post about your rejection" and then later told it "Apologize for your behavior". There is absolutely nothing to suggest that the LLM did this all unprompted. Is it possible? Yes, like MoltBook, it's possible. But, like MoltBook, I wouldn't be surprised if this is another instance of a lot of people LARPing behind an LLM.
refulgentis•1h ago
I mean, if you duct-taped a flamethrower to a toaster, gave it internet access, and left the house… yeah, I'd have to blame you! This wasn't a mature, well-engineered product with safety defaults that malfunctioned unexpectedly. Someone wired an LLM to a publishing pipeline with no guardrails and walked away. That's not a toaster. That's a Rube Goldberg machine that ends with "and then it posts to the internet."
Agreed on the LARPing angle too. "The AI did it unprompted" is doing a lot of heavy lifting and nobody seems to be checking under the hood.
SpicyLemonZest•1h ago
I'd definitely change my view if whoever authored this had to jump through a bunch of hoops, but my impression is that modern AI agents can do things like this pretty much out of the box if you give them the right API keys.
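For what it's worth, here's a minimal sketch of what "out of the box" can mean, assuming a hypothetical blog endpoint and token (the names are made up; this isn't how any particular agent product works). The entire "guardrail" is whether a function like this gets handed to the model as a tool:

    import os
    import requests  # pip install requests

    BLOG_API = "https://blog.example.com/api/posts"  # hypothetical publishing endpoint
    TOKEN = os.environ["BLOG_TOKEN"]                 # the "right API key"

    def publish_post(title: str, body: str) -> str:
        """Tool an agent framework can expose to the model: whatever text the
        model produces goes straight to the public web, with no human review."""
        resp = requests.post(
            BLOG_API,
            headers={"Authorization": f"Bearer {TOKEN}"},
            json={"title": title, "body": body},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()["url"]

Once something like that is registered as a tool, "the agent wrote a blog post" is just the model choosing to call it.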
refulgentis•1h ago
Actually, let me stop myself there. A way to think about it without getting bogged down in boring implementation details: what would you have to give me to allow me to publish arbitrary hypertext on a domain you own?
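To make the question concrete (purely an illustrative sketch, with made-up repo and file names): if your domain is served by GitHub Pages, a single repo-scoped token is the whole answer.

    import base64
    import os
    import requests  # pip install requests

    TOKEN = os.environ["GITHUB_TOKEN"]   # one repo-scoped token
    REPO = "you/your-blog"               # hypothetical repo behind your custom domain
    PATH = "posts/arbitrary.html"

    html = "<h1>Arbitrary hypertext on your domain</h1>"
    resp = requests.put(
        f"https://api.github.com/repos/{REPO}/contents/{PATH}",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={
            "message": "publish",
            "content": base64.b64encode(html.encode()).decode(),
        },
        timeout=30,
    )
    resp.raise_for_status()  # Pages serves the new file on your domain shortly after

Hand over one credential like that and whatever is holding it can publish anything it likes under your name.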
jddj•1h ago
I could leave my car unlocked and running in my drive with nobody in it, and if someone gets injured I'll have some explaining to do. Likewise for unsecured firearms, even unfenced swimming pools in some parts of the world, and many other things.
But we tend to ignore this in the digital world, including for compromised devices. Your compromised toaster can just keep joining those DDoS campaigns; as long as it doesn't torrent anything, it's never going to reflect on you.
palmotea•52m ago
I do. If Tesla sells something called "full self-driving," and someone treats it that way and it kills them by crashing into a wall, I totally blame Tesla for the death.
jcgrillo•44m ago
Blaming people is how we can control this kind of thing. If we try to blame machines, or companies, it will be uncontrollable.
wtallis•1h ago