Is it truly considered as such by the actual end users, or by company leadership with a vested interest in cultivating that idea for marketers? I've been on Reddit since 2011. Between 2011 and 2016, the site felt very human. From 2016 onward, it has progressively felt less and less human.
This is, of course, anecdotal, and it runs contrary to the growing number of users the platform acquires every year, but it mirrors scores of complaints from other users about how disingenuous the platform has felt.
Perhaps related? I've noticed that posts occupying Top/Hot/Best are increasingly made by accounts without so much as a 'Verified Email' badge [1], something Reddit has historically not enforced heavily but which is easily abused by bad actors when the barrier to entry is effectively non-existent. These accounts share similar traits: generally palatable posts (usually reposts) scattered across various subreddits, and a comment history (if any) in the same style of palatable, non-controversial statements that are easily upvoted.
Around seven years ago, u/KeyserSosa [2] acknowledged an influence campaign on Reddit. An evergreen comment from that thread:
> I am worried by just how... normal these accounts seem. How can we ever hope to weed out influencers who subvert social platforms like this one if they are so good at hiding it? Can neural algorithms even deal with this?
The ubiquity of inauthentic, AI-generated content placed in front of real human end users, enabled by that very low barrier to entry, will lend itself to more articles like this being written in the months and years to come, unless Reddit makes some sort of qualitative change. And Reddit's pattern of previous behavior here doesn't inspire confidence.
[1] https://www.redditstatic.com/awards2/verified_email-40.png
[2] https://www.reddit.com/r/announcements/comments/9bvkqa/an_up...
A favorite comment that I've read here on HN is this one [3], and it applies remarkably well to the modern social media ecosystem.
> My take is, if a community is constrained by quality (eg moderation, self-selecting invite-only etc) then the only way it grows is by lowering the threshold. Inevitably that means lower quality content. To some extent, more people can make up for it. Eg if I go from 10 excellent artists to 1000 good ones, chances are that the top 10% artwork created actually gets better.
> But eventually if you grow by lowering quality, then, well, quality drops.
> I suppose for very small societies, they may be limited by discoverability/cliquiness and not quality, so their growth doesn’t mesh with quality and so they could also get better with size.
> Note, “quality” doesn’t have to mean good/bad but also just “property”. When Facebook started, it was for kids from elite schools. It then gradually diluted that by lowering that particular bar. Then it was for kids from all schools. Then young people. Then their parents too. Clearly, it’s far from dying in absolute terms, but it’s certainly no longer what it initially was. To many initial users, it’s as good as dead though.
Reddit certainly started going downhill once they realized they could actually become a public company and make tons of money. Their fate was sealed when they got rid of user-centric apps like Alien Blue to force their gamified app ("keep your 18-day streak going!").
Other public platforms like Facebook, YouTube, TikTok, and Xitter certainly haven't cared much about trying to eliminate AI slop and, if anything, have embraced it. I can't see Reddit doing anything differently.