armchairhacker•3h ago
I think it’s actually more ethical to conduct these experiments and publish them publicly, as long as names are redacted: people become aware, and then more distrustful of and resilient to online manipulation. The key point is that I doubt punishment will meaningfully reduce these experiments, because it’s impossible to reliably distinguish AI-generated text and “experiments” from genuine conversation; punishment will only stop the experiments from being public and deter the people with moral goals. The next best solution is to reinforce the idea that many things on the internet are fake and show people what to look out for, and publishing studies like this does exactly that.
A counter-argument is that the above reasoning applies to many unethical acts, like petty shoplifting, and the world would be a worse place if people weren't nonetheless deterred. But I doubt the reasoning actually holds for anything that isn't already super-common: although it seems like you could easily get away with petty shoplifting, stores actually have many ways to prevent it (cameras and EAS tags, and in more extreme cases locked-up items or receipt checkers), whereas a good AI-generated story is indistinguishable from a bland authentic one, and a smart AI browsing the web is indistinguishable from a human. Also, it's objectively worse for a store if, say, 1100 people shoplift instead of 1000, but if 1100 bad actors manipulate people online instead of 1000, I'm not sure it's meaningfully worse: the extra people who get manipulated suffer, but online manipulation is so common that they would almost certainly suffer anyway, and once they've been burned once they become more resilient to later manipulation. Lastly, this isn't "suffering" in the sense of physical harm or loss of property, and it already affects almost everybody, so if conducting public experiments has benefits, it may still be more ethical overall than doing nothing.
Reddit has extra incentive to sue these experimenters because it wants to be seen as genuine. But discouraging the experiments won't affect its actual authenticity, and suing makes its apparent authenticity worse because of the Streisand Effect. Instead, I suggest Reddit focus on bot-proofing the site, then challenge people to manipulate it and publish their findings: “researchers tried to run a bot experiment on Reddit, but failed” would be much more favorable than “researchers ran a successful bot experiment on Reddit, now Reddit is suing them”. Unfortunately, as mentioned, AI-generated text is indistinguishable from authentic text, so while Reddit can attempt to detect and ban bots, I specifically suggest it a) add some other mechanism (e.g. trusted and/or paid accounts) to reduce online manipulation to negligible levels, b) improve its ranking algorithm so AI-generated content only gets upvoted if it's "good", and, if it chooses b, also c) encourage its users to be more openly distrustful of its content (which could be as simple as adding a prominent disclaimer: "Be skeptical! Don't believe any stories or suggestions here without evidence! People lie on the internet, one person may use thousands of bots to fake a majority opinion, and moderators may have deleted the dissenting comments!").