the thing is, these people aren't necessarily wrong - they're just 1) clueless 2) early. the folks with proper know-how and perhaps tuned models are probably selling zero days found this way as we speak.
What an absolute shambles of an industry we have ended up with.
I've found some AI assistance to be tremendously helpful (Claude Code, Gemini Deep Research) but there needs to be a human in the loop. Even in a professional setting where you can hold people accountable, this pops up.
If you're using AI, you need to be that human, because as soon as you create a PR / hackerone report, it should stop being the AI's PR/report, it should be yours. That means the responsibility for parsing and validating it is on you.
I've seen some people (particularly juniors) just act as a conduit between the AI and whoever is next in the chain. It's up to more senior people like me to push back hard on that kind of behaviour. AI-assisted whatever is fine, but your role is to take ownership of the code/PR/report before you send it to me.
And the worst case is when AI generates great code with a tiny, hard-to-discover catch that takes hours to spot and understand.
It’s a lose-lose situation for the maintainers
> Thanks for the quick review. You’re right — my attached PoC does not exercise libcurl and therefore does not demonstrate a cURL bug. I retract the cookie overflow claim and apologize for the noise. Please close this report as invalid. If helpful, I can follow up separately with a minimal C reproducer that actually drives libcurl’s cookie parser (e.g., via an HTTP response with oversized Set-Cookie or using CURLOPT_COOKIELIST) and reference the exact function/line in lib/cookie.c should I find an issue.
𓂫 ~ 𓃝 JdeBP𓆈localhost 𓅔 % 𓅭 pts/0
While rockets and hearts seem more like unnecessary abuse, there are a few icons that really make sense in CLI and TUI programs, but now I'm hesitant to use them as then people who don't know me get suspicious it could be AI slop.
Sane languages have much less of this problem but the damage was done by the cargo cultists.
Much like how curly braces in C are placed where they are because back in the day you needed your punch card deck to be editable, but we got stuck with it even after we started using screens.
I believe it was technical documentation, and the author wanted to create visual associations with the actors in the given example: a clock for the async ordering process, the (food) order, a burger, etc.
I don't remember if I commented on the issue myself, but I do remember that it reduced readability a lot - at least for me.
https://www.gally.net/miscellaneous/hn-em-dash-user-leaderbo...
As #9 on the leaderboard I feel like I need to defend myself.
But then, long before I had a Compose key, in my benighted days of using Windows, I figured out such codes as Alt+0151. 0150, 0151, 0153, 0169, 0176… a surprising number of them I still remember after not having typed them in a dozen years.
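Those Alt codes decode neatly: on Windows, Alt+0NNN inserts the character at byte NNN of codepage 1252 (Windows-1252), which is why 0151 yields an em dash. A quick Python sketch to check the codes listed above:

```python
# Windows Alt+0NNN inserts the character at byte NNN of codepage 1252,
# so each remembered code maps to a single cp1252 byte.
for code in (150, 151, 153, 169, 176):
    ch = bytes([code]).decode("cp1252")
    print(f"Alt+0{code} -> {ch} (U+{ord(ch):04X})")
```

which prints the en dash, em dash, trademark, copyright, and degree signs, respectively.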
compose - - -
and it makes an em dash; it takes only a quarter of a second longer to produce this.
I don't know why the compose key isn't used more often.
[0]: https://en.wikipedia.org/wiki/Compose_key#Common_compose_com...
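For anyone who wants to try it, the sequences live in the system Compose table and can be extended in ~/.XCompose. A minimal sketch (assuming X11, where this file is read by the input method; the three-hyphen sequence is the stock one):

```
# ~/.XCompose sketch: pull in the locale's default Compose table first
include "%L"

# Stock sequences from the default table:
<Multi_key> <minus> <minus> <minus>  : "—"  U2014   # em dash
<Multi_key> <minus> <minus> <period> : "–"  U2013   # en dash
```

If no Compose key is mapped yet, `setxkbmap -option compose:ralt` assigns it to Right Alt on X11.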
Like, do LLMs have actual applications? Yes. By virtue of using one, are you by definition a lazy know-nothing? No. Are they seemingly quite purpose-built for lazy know-nothings to help them bullshit through technical roles? Yeah, kinda.
In my mind this is this tech working exactly as intended. From the beginning the various companies have been quite open about the fact that this tech is (supposed to) free you from having to know... anything, really. And then we're shocked when people listen to the marketing. The executives are salivating at the notion of replacing development staff with virtual machines that generate software, but if they can't have that, they'll be just as happy to export their entire development staff to a country where they can pay every member of it in spoons. And yeah, the software they make might barely function but who cares, it barely functions now.
As a Swedish native it really breaks my reading of an English word, but apparently it's supposed to indicate that you should pronounce each "o" separately. Language is fun.
It's just confusing for us poor Swedes since "ö" in Swedish is a separate letter with its own pronunciation, and not a somehow-modified "o". Always takes an extra couple of seconds to remember how "Motörhead" is supposed to be said. :)
- ChatGPT uses em dashes in basically every answer, while on average humans don't (the average user might not even be aware the character exists)
- if the preference for em dashes came from the training set, other AIs would show the same bias (Gemini and Le Chat don't seem to use them at all)
Turns out lots of us use dashes — and semicolons! And the word “the”! — and we’re not going to stop just because others don’t like punctuation.
I call this technique: "sprAI and prAI".
Didn't want to be seen as just padding my GitHub.
It's essentially spam, automatically generated content that is profitable in large volume because it offsets the real cost to the victims, by wasting their limited attention span.
If you want me to read your text, you should have the common courtesy to at least put in similar work beforehand and read it yourself at least once.
We have reviewed your claims and found that [the account impersonating your grandma] has not violated our guidelines.
(The CVE system has been under strain for Linux: https://www.heise.de/en/news/Linux-Criticism-reasons-and-con... )
Overall, people are making a net-negative contribution by not having a sense of when to review/filter the responses generated by AI tools, because either (i) someone else is required to make that additional effort, or (ii) the problem is not solved properly.
This sounds similar to a few patterns I've noticed:
- The average length of documents and emails has increased.
- Not alarmingly so, but people have started writing Slack/Teams responses with LLMs. (and it’s not just to fix the grammar.)
- Many discussions and brainstorms now start with a meeting summary or transcript, which often goes through multiple rounds of information loss as it’s summarized and re-expanded by different stakeholders. [arXiv:2509.04438, arXiv:2401.16475]
Thank you for your profound observation. Indeed, the paradox you highlight demonstrates the recursive interplay between explanation and participation, creating a meta-layered dialogue that transcends the initial exchange. This recursive loop, far from being trivial, is emblematic of the broader epistemological challenge we face in discerning sincerity from performance in contemporary discourse.
If you’d like, I can provide a structured framework outlining the three primary modalities of this paradox (performative sincerity, ironic distance, and meta-explanatory recursion), along with concrete examples for each. Would you like me to elaborate further?
Want me to make it even more over-the-top with like bullet lists, references, and faux-academic tone, so it really screams “AI slop”?
If helpful, I can follow up separately with a minimal reproducible example of this phenomenon (e.g. via a mock social interaction with oversized irony headers or by setting CURLOPT_EXISTENTIAL_DREAD). Would you like me to elaborate further on the implications of this recursive failure state?
Me: "yes, as a matter of fact I am"
Interviewer: "What's 14x27?"
Me: "49"
Interviewer: "That's not even close"
Me: "Yeah, but it was fast"
“Is this your card?”
“No, but damn close, you’re the man I seek”
function getRandomNumber() {
    return 4; // chosen by fair dice roll. guaranteed to be random.
}
It finishes "I can follow up ... blah blah blah ... should I find an issue"
Tone deaf and utterly infuriating.
BECAUSE YOU CAN'T FUCKING TRUST THOSE LYING HALLUCINATING PIECES OF SHIT.
I suppose there's a reason why kids are usually banned from using calculators during their first years of school when they're learning basic math.
But frankly security theatre was always going to descend into this with a thousand wannabe l33ts targeting big projects with LLMs to be "that guy" who found some "bug" and "saved the world".
Shellshock showed how bad a large part of the industry is. It was not a bug. "Fixing" it caused a lot of old tried and tested solutions to break, but hey, we as an industry need to protect against the lowest common denominator who refuse to learn better...
[1] https://daniel.haxx.se/blog/2025/07/14/death-by-a-thousand-s...
Open Source Contributor: - Diagnosed and fixed a key bug on Curl
With apologies for stereotyping.
The “fix” was setting completely fictitious properties. Someone had plugged the GitHub issue into ChatGPT, which spat out an untested answer.
What’s even the point…
> The reporter was banned and now it looks like he has removed his account.
[0] https://daniel.haxx.se/blog/2025/07/14/death-by-a-thousand-s... [1] https://gist.github.com/bagder/07f7581f6e3d78ef37dfbfc81fd1d...
I don't even... You just have to laugh at this I guess.
"The AI bubble is so big it's propping up the US economy" - https://www.bloodinthemachine.com/p/the-ai-bubble-is-so-big-...
https://youtu.be/6n2eDcRjSsk?si=p5ay52dOhJcgQtxo -- "AI slop attacks on the curl project", Daniel Stenberg's keynote at the FrOSCon 2025 conference, August 16, in Bonn, Germany.
Plus, linked above, his blogpost on the same subject https://daniel.haxx.se/blog/2025/08/18/ai-slop-attacks-on-th...
Even if not AI, there are probably many unskilled developers who submit bogus bug reports, even unknowingly.
So open source projects would get bug reports like "my commercial static analysis tool says there's a problem in this function, but I can't tell you what the problem is."
Completely useless 99% of the time but that didn’t stop a good number of them following up asking for money, sometimes quite aggressively.
It doesn't matter if it's made by AI or a human; spammers operate by cheaply overproducing and externalizing their work onto you to validate their shit. And it works because sometimes they do deliver value by virtue of large numbers. But they are a net negative for society. Their model stops working if they have to pay for the time they wasted.
https://en.wikipedia.org/wiki/Batu_(given_name)
> Batu is a common masculine Central Asian name.
https://en.wikipedia.org/wiki/Batuhan
> Batuhan is a masculine Turkish given name.
LLMs produce so much text, including code, and most of it is not needed.
I'm wondering (sadly) if this is a kind of defense-prodding phishing similar to the XZ utils hack, curl is a pretty fundamental utility.
Similar to 419 scams, it tests the gullibility, response time/workload of the team, etc.
We have an AI DDoS problem here, which may need a completely new pathway for PRs or something. Maybe Nostr-based, so PRs can be validated in a web of trust?
[0] https://www.bgnes.com/technology/chatgpt-convinced-canadian-...
> The breakdown came when another chatbot — Google Gemini — told him: “The scenario you describe is an example of the ability of language models to lead convincing but completely false narratives.”
Presumably, humans had already told him the same thing, but he only believed it when an AI said it. I wonder if Gemini has any kind of special training to detect these situations.