It's obvious on the channels, because these reply sets usually don't contain a lot of replies to comments (if there are any comment replies, it's almost always from the channel owner). It's so obvious, in fact, that I'm surprised YouTube hasn't done something to address it.
“Wow! Seems like it’s so easy to change over with savings like that!”
Doesn’t mean I don’t ever get duped, but idk. You learn to spot the signs. I imagine most of us on HN catch most instances. Genuine-seeming referrals aren’t as easy to fake as one would think.
Then again, you should live under the assumption that your Google account could be banned at any time with no recourse. You do have local backups of all your Google account data and don't need your Gmail account to access anything important, right?
"For many years" being around 20 years at this point. Not sure reddit is a great example, given the founders admitted to using sockpuppets almost since day 1 in order to generate fake activity on the platform.
I've acquired a sense for at least some of the bots. There's a set of bots that each post one high-engagement post about once a day to an implausibly large range of subreddits, with implausible regularity. I can tell from the fact that I remove them while most other subs don't that most subs haven't figured this out yet.
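That heuristic — regular posting intervals plus an unusually wide spread of subreddits — can be sketched as a simple check. This is a toy illustration, not anything Reddit actually runs; the threshold values (`max_cv`, `min_subs`) are made-up guesses, and the inputs are assumed to come from an account's public post history.

```python
from statistics import mean, stdev

def looks_like_scheduled_bot(post_times, subreddits, max_cv=0.15, min_subs=20):
    """Flag accounts that post at implausibly regular intervals
    across an implausibly wide range of subreddits.

    post_times: posting timestamps in seconds (need not be sorted)
    subreddits: subreddit name per post
    Thresholds are illustrative, not tuned values.
    """
    if len(post_times) < 5:
        return False  # too little history to judge
    ts = sorted(post_times)
    gaps = [b - a for a, b in zip(ts, ts[1:])]  # inter-post intervals
    # Coefficient of variation: near 0 means clockwork-regular posting.
    cv = stdev(gaps) / mean(gaps)
    return cv < max_cv and len(set(subreddits)) >= min_subs
```

A human posting in bursts has a high coefficient of variation and usually sticks to a handful of communities, so they fall through both tests; a once-a-day scheduler spraying dozens of subs trips both.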
There is an obvious solution to that problem, which I haven't wanted to put out there, but I've become increasingly suspicious it's already been figured out anyway: limit each user account to a specific "persona" with plausible interests and plausible posting rates.
And that's where I think the race may well end, with victory to the spammers. If there's a winning move against that in general, I haven't figured it out.
I know reddit is concerned about this at the corporate level, but I'm not sure they realize it's possibly their #1 threat, towering above all others. Not that I have any specific suggestions about what to do about it either. It will be years before the masses realize this and stop visiting, and by the time that happens, all the social media companies are going to be in trouble for the same reason. You can see the leading edge here on HN, but it's still only a nearly negligible fraction of the total userbase of something like Reddit today. That will change.
I think the turning point was when they allowed accounts to hide their comment history. Before, when you could click on an account and read all of their other comments it was easy to tell when an account only existed for fake conversations about a product they were spamming.
Now the spam accounts hide their comment history so they can do nothing but spam similar comments all over Reddit, walking the line where it's not obvious whether any single comment is spam or a one-off comment from someone trying to be helpful.
Users are using Google and other services to find their other posts and post warnings, but it takes so much more effort now.
It's maybe account laundering, but on any popular post you'll see that at least half of the comments are tangential at best. They're not an expression of anything a person would actually express: replying with just skull emojis to a random news post, or saying "he really said" followed by a word-for-word recreation of a throwaway quote from a video. No one ever replies to these posts, they get maybe 2 upvotes (if that), and the platform doesn't reward them at all, yet they constantly appear in a very artificial-looking way.
Not enough people are flagging those when it aligns with their bias. It's even less likely to get flagged when it's a double whammy of politics and AI. Loosely being about AI should not give it a free pass.
If we don’t police our side nobody will.