Though considering the Reddit founders were pretending to be different users to fake engagement, it wouldn't surprise me if they also led the charge in the Digg exodus.
But I actually wonder if you need the “social graph” at all at first. You could start by making it a place where people post jobs and apply for them. The network effects will come later. All you need is maybe a simple algorithm that matches users to postings they might qualify for.
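To make the "simple algorithm" idea concrete, here's a minimal sketch of what such a matcher could look like: a hypothetical keyword-overlap scorer (not any real site's algorithm) that ranks postings by how many of a user's skills appear in the posting's requirements.

```python
# Hypothetical job-matching sketch: score postings by skill overlap.
def match_postings(user_skills, postings, top_n=3):
    skills = {s.lower() for s in user_skills}
    scored = []
    for posting in postings:
        required = {r.lower() for r in posting["requirements"]}
        overlap = len(skills & required)
        if overlap:
            scored.append((overlap, posting["title"]))
    # Highest overlap first; break ties alphabetically.
    scored.sort(key=lambda pair: (-pair[0], pair[1]))
    return [title for _, title in scored[:top_n]]

postings = [
    {"title": "Backend Engineer", "requirements": ["Python", "SQL", "Docker"]},
    {"title": "Data Analyst", "requirements": ["SQL", "Excel"]},
    {"title": "iOS Developer", "requirements": ["Swift"]},
]
print(match_postings(["python", "sql"], postings))
# → ['Backend Engineer', 'Data Analyst']
```

Something this crude would obviously need real relevance ranking eventually, but it's enough to bootstrap before any network effects exist.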
You wouldn't cringe at LLM output because there's no social context to the situation depicted. You could identify the words as cringeworthy but the emotional subtext would be missing.
Which is why the argument "if the LLM produces good output, why does it matter where it came from?" doesn't land for many. Art is more than its surface content, and AI is exposing a split between camps that do and don't see it that way.
The cringe will be just as strong, since most of the existing content is just copypastas that have gotten traction before. Once a human hand invokes the LLM, they have taken ownership and "approved this message," so I can still quietly and privately laugh at them.
I would love to see this get litigated.
Is that an appropriate legitimate interest or do we need to split hairs?
But what kind of content? The fake, AI-generated posts made on my behalf? Probably not. Maybe it's generic LLM training. Or will we constantly be evaluated and profiled by AI for the purpose of automating future hiring, eliminating the need for human intervention? What a dystopian outlook. I want humans back.
Maybe some of this data is gatekept (I've only been able to view other people's profiles after logging in), but I wouldn't trust Meta, the company that used stolen e-book libraries to train their LLMs, not to find ways around it.