I built this tool primarily to identify AI writing in articles and posts, but it's proven useful for comments/responses too: https://tropes.fyi/vetter
That won't entirely weed out these tropes, but it will massively change the style.
Then add a few specific rules and make it review its writing, instead of expecting it to get it right while writing.
Weeding out the tropes is largely a question of enforcing good writing through rules.
A whole lot of the tropes are present because a lot of people write that way. It may have been amplified by RLHF etc., but in that case it's been amplified because people have judged those responses to be better - after all, that is what RLHF is.
Not least because a lot of these habits are things that novice writers will have had drummed into them. E.g. clearly signposting a conclusion is not uncommon advice.
Not because it isn't ham-fisted, but because they're not yet good enough for the linked advice ("Competent writing doesn't need to tell you it's concluding. The reader can feel it") to apply, and a clumsy signpost is better than the conclusion not being clear to the reader at all. And for more formal writing, people will be told to signpost even more explicitly, with headings.
The post says "AI signals its structural moves because it's following a template, not writing organically." But guess what? So do most human writers. Sometimes far more directly and explicitly than an AI.
To be clear, I don't think the advice is bad when given to a sufficiently strong model - Opus, for example, is definitely capable of taking on writing rules with some coaxing (and a review pass) - but I could imagine my teachers at school presenting this, stripped of the AI references, to get us to write better.
If anything, I suspect AI writes like this because it gets rewarded in RLHF because it reads like good writing to a lot of people on the surface.
EDIT: Funnily enough, https://tropes.fyi/vetter thinks the below is AI assisted. It absolutely is not. No AI has gone near this comment. That says it all about the trouble with these detectors.
"Respond within 4-12 hours."
"Do not respond between midnight and 6am EST." (Or CET, whatever makes sense.)
Right now the most obvious traits are the well-known ones that are hard for most LLMs to shake off: em-dashes, word choices, and the very limited ways in which they structure sentences. Terseness and conciseness are also a tell, which sucks.
A great link to share around!
Now I've been wondering - what is the polite way to exit a conversation when it becomes obvious that your fellow interlocutor is merely a chunk of electric meat redirecting the output of Sam Altman? I'm talking blatantly obvious, e.g. "it's not x, it's y" multiple times in the same paragraph.
Kinda similar to ye olde newsgroup custom of replying "plonk" when you add someone to your killfile.
Nah, that's not natural even if a living person does it without the help of a LLM.
Newcorpospeak, perhaps. Not natural.
The bots are going to win this war. I'm not sure what the implications of that are, though.
- "control plane", a media ecosystem where everything could be fake
- "ground plane", in-person gatherings and demonstrations, which are much harder to fake but have extremely limited access to information and are easily suppressed
There was a really interesting talk given by Mathias Shindler (long-time editor of the German Wikipedia) at the 39C3 conference about this topic a few months back that is worth a watch for anyone interested in the issue: https://youtu.be/fKU0V9hQMnY
https://devcommunity.x.com/t/update-to-reply-behavior-in-x-a...
> Moving forward, replies via the API will only be permitted if the replier has been explicitly summoned by the original post’s author. This means: The original author @mentions the replying user/account in their post, or The original author quotes a post from the replying user/account.
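The quoted policy is easy to state as a predicate: a reply via the API is allowed only if the original author summoned the replier by @mention or by quoting them. A minimal illustrative sketch (this is not the real X API; the dict shape and field names are my own assumptions):

```python
def reply_allowed(original_post: dict, replier_handle: str) -> bool:
    """Return True if `replier_handle` was summoned by the original author.

    `original_post` is a hypothetical dict with 'mentions' (handles the
    author @mentioned) and 'quoted_authors' (authors of posts they quoted).
    """
    summoned_by_mention = replier_handle in original_post.get("mentions", [])
    summoned_by_quote = replier_handle in original_post.get("quoted_authors", [])
    return summoned_by_mention or summoned_by_quote
```

So a reply bot that nobody mentioned or quoted simply never clears the check.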
So you are saying the bots go to sleep? Not a very smart assumption.
Google has spent billions trying to distinguish bots from users, and has been largely unsuccessful.
When in actuality what it did was kill all the fun and entertaining bots through API limitations, leaving only the people willing to pay for a checkmark and for API access.
He says a lot of shit.
Robots are the new cars. The Moon is the new Mars. Turn, turn, turn.
This raises a rat's nest of issues, but will we be able to avoid this necessity?
So... you can't win.
I wonder if it is possible at all to have anonymity without admitting bots.
FML we better develop social norms around this asap because this fuckin blows
AI in the middle makes colleagues more tolerable if you didn't really get along with them well originally.
This is a complex problem. But the first step of that problem is Twitter/X
Avoid it, and the next step toward a solution may be easier.
I still don't understand why people use his platform and give him the power he has. We have seen him use that power to reduce children's access to food, promote people who are examples of no ethics whatsoever, and actively work on destroying numerous democracies by spreading right-wing propaganda.
One thing giving him the power to do this is the users of his platforms, and anyone still on Twitter is contributing to it.
The problem is that he doesn't care about the money, so he can fuel his rage-bait machine for as long as he wants, which would normally not be possible.
> AI-generated replies really are the scourge of Twitter these days. Anyone know if it's from packaged solutions being sold as a product or if it's people mainly rolling their own custom reply-bots
> ... and I just found out the category name for this is "reply guy" tools which is so on the nose it hurts
(You can confirm this by Google searching "reply guy service".)
The more determined salesmen last for 3-4 emails, but most drop off after 2 or so.
Especially for my parents, who are getting targeted like crazy by telemarketers.