It is ok to ask a lot of questions, it is ok to be skeptical of friendly interactions, it is ok to be suspicious. These behaviors are not social anxiety, not psychosis, not anti-social. They are, in fact, desirable human traits that contribute to the larger group.
There is no automated detection, no magic way of keeping these new threats away. They work by exploiting humans in vulnerable states. We need kind humans who are less vulnerable to those things.
Are you a human?
Is that real text you typed out?
Does anything you're saying have any meaning?
----
That is essentially what you are asking for. Every single online interaction immediately viewed as entirely suspect, with people having to go out of their way to prove they are…people.
Well, perhaps you're right that this is where online culture is headed, but we don't have to like it. I hate it. I hate it so much.
The other option is trying to build your own bubble of protection and trust, where everyone is happy and friendly. Good luck with that.
We need smarter humans, it's the only way.
A nasty SEO company with vast resources could create thousands of accounts, even if there's an entry fee, as long as it determines that the entry fee costs less than the value it would get from spamming.
Separately, why are companies doing this? Surely it is counterproductive to their marketing efforts? Or am I wrong, and any attention is good attention?
That way, humans could impersonate AIs, but AIs would be legally encouraged, shall we say, not to impersonate humans.
"It could never be enforced" and "but there will be bad actors who don't comply" are useful and valid discussions to have, but I think they are separate from the question of whether this would be a worthwhile regulatory concept to explore.
Search engines would probably deprioritize any site that admits to being AI-generated.