This is the insipid blathering of a woke cretin.
It is in fact widespread reliance on AI that will hinder groups of people from acquiring the skills needed to be in that class.
The idea that some internet randos commenting "AI could have written that" have gatekeeping power, preventing people from becoming knowledge workers, is preposterous.
The one way in which it is plausible: the work to which the remark is applied was not in fact written by AI, but its author becomes convinced by the remark that such work could be written by AI, and adopts AI as a result. Their newfound habit subsequently rots their brain, sending them plummeting off the ladder toward the knowledge class. I jest, but only half so.
Actually, let's examine this "systematic exclusion" claim.
Firstly, the basic premise of the article is that "this could have been written by AI" is disparagement. But disparagement is nothing new. If disparagement is intended, all one needs is "this was written by an idiot". I think that "this could have been written by AI" is much softer. In fact, a possible interpretation is that the speaker believes in the use of AI, and thinks it could have been used to save time in producing something of the same quality. Anyway, we've had disparagement in online forums going back to dial-up BBSes; this is just a new variant on plain old flaming.
If the remark is disparagement, does it add up to "systematic exclusion of underprivileged groups"?
In forums and social media, people mostly don't care who you are; they respond to the content. If it looks like AI slop, they don't care whether the person behind the pseudonym is a Stanford professor or a German shepherd, and just turn on their flamethrower.
Let's say that systematic exclusion is happening in spite of commenters not actually targeting disadvantaged groups, but only responding to the content. What that systematic exclusion hypothesis then entails is that posts from underprivileged groups are actually garbage, and therefore attract more disparagement!
So in fact it is the author of this paper who holds a cynical, discriminatory view of underprivileged groups (whoever he imagines them to be, exactly). Underprivileged groups are morons who write garbage that could be written by AI (and thus precisely receive comments to that effect); and, moreover, are so weakly constituted that these discouraging comments prevent them from entering a knowledge professional class (in addition to the main factor, that being their lack of ability).
Someone who is not a member of an underprivileged group either does not write posts reminiscent of AI drivel, and so doesn't attract those comments, or, even if he or she does, the negative comments slide right off due to their thicker skin.
Oh really? Some of the thinnest skins in the world come from privilege: for instance, think of the middle-aged man-child who buys an entire social network for billions in order to be able to suppress critical comments about himself.
I think it may be better to say that the author has an agenda or is co-opting real issues, but I can't think of an elegant way to phrase that.
I don't really agree with the general argument, though. I don't think painting this as an "AI slop" issue is fair. Online communities are quicker (and quieter!) in dismissing obvious AI slop than in dismissing legitimate discourse that looks like AI, was cleaned up with AI, or even just uses em dashes. Perhaps the one excusable usage is marking content as machine-translated, though that of course brings its own disadvantages for the poster. But that's just one point of view.
That seems to be very hard to realize in the US. But from the outside, both ends look batshit insane.
The "Blue Check" insult regime didn't get anywhere and I doubt any anti-LLM/anti-diffusion-model stuff will last. "Stop trying to make fetch happen". The tools are just too useful.
People on the Internet are just weird. Some time in the early 2010s the big deal was "fedoras". Oh you're weird if you have a fedora. Man, losers keep thinking fedoras are cool. I recall hanging out with a bunch of friends once and we walked by a hat shop and the girls were all like "Man, you guys should all wear these hats". The girls didn't have a clue: these were fedoras. Didn't they know that it would mark us out as weird losers? They didn't, and it turned out it doesn't. In real life.
It only does on the Internet. Because the Internet is a collection of subcultures with some unique cultural overtones.
And what the hell is that segue into fedoras? The entire meme exists because stereotypically clueless individuals took fedoras to be the pinnacle of fashion, while disregarding nearly everything else about not only their outfit, but their bodies.
This entire comment reeks of not actually understanding anything.
Or a cheap LLM acting as them, and wired up to KnowYourMeme via MCP? Can't tell these days. Hell, we're one weekend side project away from seeing "on the Internet, no one knows you are a dog" become true in a literal sense.
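Half-seriously, the weekend project is almost trivial. A minimal sketch, assuming the official Python MCP SDK; the KnowYourMeme search URL and the tool shape are my own guesses, not a real integration:

    import urllib.parse

    from mcp.server.fastmcp import FastMCP

    # Expose one tool that an LLM client can call over stdio.
    mcp = FastMCP("meme-lookup")

    @mcp.tool()
    def lookup_meme(name: str) -> str:
        """Return a KnowYourMeme search URL for a meme name (guessed URL scheme)."""
        # A real version would fetch and summarize the page; this just builds the URL.
        return "https://knowyourmeme.com/search?q=" + urllib.parse.quote(name)

    if __name__ == "__main__":
        mcp.run()  # stdio transport by default, so an MCP-capable client can attach

Point any MCP-capable client at that script and, as far as the thread can tell, the dog is a well-read commenter.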
I think there is at least some truth to this.
Another possible cause of AI shaming is that reading AI writing feels like a waste of time. If the author didn’t bother to write it, why should I bother to read it and provide feedback?
I have spent 10+ years working on teams primarily composed of people whose first language is not English, in workplaces where English is the mandated business language. Ever since the earliest LLMs appeared, the written communication of non-native speakers has become a lot clearer grammatically, but also a lot more florid and pretentious than the writers actually intend. This is really annoying to read, because you have to mentally decode the LLM-ness of their comments/messages/etc. back into normal English, which ends up costing more cognitive overhead than their blunter and/or broken English used to. But, from their perspective, I also understand that it saves them cognitive effort to just type a vague notion into an LLM and ask for a block of well-formed English.
So, in some way, this fantastic tool for translation is resulting in worse communication than we had before. It's strange and I'm not sure how to solve it. I suppose we could use an LLM on the receiving end to decode the rambling mess the LLM on the sending end produced? This future sucks.
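To be clear, I'm only half joking about the decoder. A minimal sketch, assuming the OpenAI Python client; the model name and prompt are placeholders, not a recommendation:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def de_flourish(text: str) -> str:
        """Ask a model to compress florid LLM-ese back into plain English."""
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system",
                 "content": "Rewrite the user's message in short, plain, direct "
                            "English. Drop filler, hedging, and ornament."},
                {"role": "user", "content": text},
            ],
        )
        return resp.choices[0].message.content

    print(de_flourish("I hope this message finds you well! Per my last email..."))

Of course, then both sides are paying for tokens to undo each other's tokens.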
The main issue with current AI is that, without at least some level of knowledge, you can't validate its output.
I like to discuss a topic with an LLM and generate an article at the end. It is more structured and better worded but still reflects my own ideas. I only post these articles on a private blog; I don't pass them off as my own writing. But I find this exercise useful because I use LLMs as a brainstorming and idea-debugging space.
Generative AI produces work that is voluminous and difficult to check. It presents such challenges to people who apply it that they, in practice, do not adequately validate the output.
The users of this technology then present the work as if it were their own, which misrepresents their skills and judgement, making it more difficult for other people to evaluate the risk and benefits of working with them.
It is not the mere otherness of AI that results in anger about it being foisted upon us, but the unavoidable disruption to our systems of accountability and ability to assess risk.
In AI discussions, Poe's law is rampant. You can never tell what is parody and what is not.
There was a (former) xAI employee who got fired for advocating the extinction of humanity.
The cognitive dissonance required to imply that expecting knowledge from a knowledge worker, or expecting knowledge-centered discourse, is a form of boundary work or discrimination is extremely destructive to any and all productive work, once you consider how much of science and technology depends on a fragile system of knowledge preservation and incremental improvement, a system that is intentionally pedantic in order to provide stable ground for progress. In a lot of fields, replacing this structure with AI is still very much impossible, but explaining why for each example an LLM blurts out is tedious work. I need to sit down and solve a problem the right way, and in the meantime ChatGPT can generate about 20 false solutions.
If you read the paper, the author even uses terms related to discrimination by immutable characteristics, invokes xenophobia, and quotes a black student calling it racist to discourage the use of AI as a cheating device.
This seems to me utter insanity, and it should not only be ignored, but actively pushed back against on the grounds of anti-intellectualism.
And it's all wrapped in a lovely package of AI apologetics - wonderful.
So, honestly, no. The identity of the author doesn't matter, if it reads like AI slop the author should be grateful I even left an "AI could have written this" comment.