They could offer value, but only rarely, at least with the LLM/model/context they used.
> toll it takes to deal with these mind-numbing stupidities.
Could have a special area for submitting these where AI does the rejection letter and banning.
Eating human excrement can also offer value in the form of undigested pieces of corn and other seeds. Are you interested?
So…
I don’t care how that sausage is made. Heck, sometimes gen AI even allows people who otherwise wouldn’t have had the time or skills to come up with funny things.
What annoys me is all the spam SEO-gamed websites with low information density drowning the answer I’m actually looking for in pages of empty sentences.
When they haven’t just gamed their way to the top of search results without actually containing any answer.
And that didn’t need LLMs to exist. Just greed and actors with interests unaligned with mine. Such as Google’s former head of ads, apparently. [0][1]
What they did was:
1) Prompt an LLM for a generic description of potential buffer overflows in strcpy() and generic demonstration code for a buffer overflow (with no connection to curl or even OpenSSL at all).
2) Present some stack traces and grep results that show usage of strcpy() in curl and OpenSSL.
3) Simply claim that the strcpy() usages from 2) somehow indicate a buffer overflow, with no additional evidence.
4) When called out, just pretend that the demonstrator code from 1) was the evidence, even though it's obvious that it's just a textbook example and doesn't call any code from curl.
It's not that they found some potentially dangerous code in curl and didn't go all the way to prove an overflow, which could have at least some value.
The entire thing is just bullshit made to look like a vulnerability report. There is nothing behind it at all.
Edit: Oh, cherry on top: The demonstrator doesn't even use strcpy() - nor any other kind of buffer overflow. It tries to construct some shellcode in a buffer, then gives up and literally calls execve("/bin/sh")...
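To give a sense of how empty that demonstrator is, here is a hypothetical reconstruction in C based only on the description above (not the actual submitted code): it never touches curl or OpenSSL, never overflows anything, and just prints scary messages before spawning a shell outright.

    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        char buf[64];  /* the "target buffer", never actually overflowed */
        printf("[*] crafting shellcode...\n");
        printf("[*] exploiting buffer at %p\n", (void *)buf);
        /* the "exploit": just exec a shell directly */
        execve("/bin/sh", (char *[]){ "/bin/sh", NULL }, NULL);
        return 0;
    }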
The worst part is that once they are asked for clarifications by the poor maintainers, they go on offense and become aggressive. Imagine the nerve of some people: using LLMs to try to gaslight an actual expert into thinking they made a mistake, and then acting annoyed/angry when the expert asks normal questions.
My guess is that the aggression is part of the ruse. Trying to start drama or intimidate the other party when your bluff is being called is the oldest strategy...
(You could see a similar pattern in the xz backdoor scheme, where they were deliberately causing distress for the maintainer to lower their guard.)
Or maybe the guy here hoped that the reviewers would run the demo - blindly - and then somehow believe it was real? Because it prints some scary messages and then does open a shell. Even if that's the only thing it does...
Still a net negative overall, given that you have to spend a lot of effort separating the wheat from the chaff.
> Could have a special area for submitting these where AI does the rejection letter and banning.
So we'll just have one AI talking to another AI with an indeterminate outcome and nobody learns anything of value. Truly we live in the future!
Do remember "we're" (hi, interjecting) talking about open source maintainers; we didn't all make curl or Facebook.
Everyone is talking about AI in the comments, but the article estimates only 20% of their submissions are AI slop.
The rest are from people who want a curl contribution or bug report for their resume. With all of the talk about open source contributions as a way to boost your career or get a job, getting open source contributions has become a checklist item for many juniors looking for an edge. They don’t have the experience to know what contributions are valuable or correct, they just want something to put on their resume.
I’ve had mixed results. Most maintainers are happy to receive a well-formatted update to their documentation. Some get angry at me for submitting non-code updates. It’s weird.
But updating dependencies and such is totally unproductive. It's contributing for the sake of having contributed in its purest form. The only thing that's worse is opening a PR to add a political banner to someone else's readme, and then getting very pissed off when they respectfully close it.
Such as the buffer length check in [1] where the report hallucinated an incorrect length calculation and even quoted the line, then completely ignored that the quoted line did not match what the report was talking about and was in fact correct.
So essentially, can we put up a gaslighting filter?
It seems like those kinds of inconsistencies could be found, ironically, by an LLM.
I wonder what the solution is here. You need to be able to receive reports from anyone, so a reputation-based system is not applicable. It also seems like we cannot detect whether a piece of text was generated with an LLM...
Based on reading those same reports, I think you totally can detect it, and Daniel also thinks that -- or at least, you can tell when it's very obvious and the user has pretty much just pasted what they got from the LLM into the submit box. Sneaky humans, trying to disguise their sources by removing the obvious tells, make it harder.
The curl staff assume good faith and let the submitter explain themselves. Maybe the submitter has a reason for using it -- the submitter may be honest or dishonest as they wish.
I like that the curl staff ask submitters to admit up-front if any AI was used, so they can discriminate between people with a legitimate use case (e.g. people who don't speak English but can find valid exploits and want to use machine translation for their writeup), versus others (e.g. people who think generalised LLMs can do security analysis).
But even so, the central point of this blog post is that the bad humans waste their time, they can't get that time back, and even directly banning them does not have much of an effect on the bad humans, or the next batch of bad humans.
And when I say "non-anonymity" I don't mean "public". You can be non-anonymous with one person, not the whole world.
LLMs are the perfect tool to annihilate online communities. I wonder when we'll see the first deliberate attack. These incidents seem (so far) isolated and just driven by greed.
That would likely fix some of it, but I suspect that you'd still get a lot, anyway, because people program their crawlers to hit everything, regardless of their relevance. Doesn't cost anything more, so why not? Every little hit adds to the coffers.
Uhh... How does it not cost more to hit everything vs specific areas? Especially when you consider the actual payout rate for such approaches, which cannot possibly be very high - every little hit does not add to the coffers, which means you have to be more selective about what you try.
AI just allows them to be more effective.
It supports almost every file-related networking protocol under the sun and a few more just for fun. (https://everything.curl.dev/protocols/curl.html)
Meanwhile 99.8% of users (I assume) just use it for HTTP.
Here are a few complex protocols I bet many do not know that curl supports (a minimal usage sketch follows the list):
- SMB
- IMAP
- LDAP
- RTMP
- RTSP
- SFTP
- SMTP
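For a sense of how uniform this is in practice, here is a minimal libcurl sketch (hostname and credentials are placeholders) that lists a mailbox over IMAPS using the same easy interface most people only ever use for HTTP:

    #include <curl/curl.h>

    int main(void) {
        CURL *curl = curl_easy_init();
        if (!curl)
            return 1;

        /* Non-HTTP protocol, same easy API: fetch the INBOX listing over IMAPS.
           URL and credentials below are placeholders. */
        curl_easy_setopt(curl, CURLOPT_URL, "imaps://mail.example.com/INBOX");
        curl_easy_setopt(curl, CURLOPT_USERNAME, "user");
        curl_easy_setopt(curl, CURLOPT_PASSWORD, "secret");

        CURLcode res = curl_easy_perform(curl);  /* response goes to stdout by default */
        curl_easy_cleanup(curl);
        return res == CURLE_OK ? 0 : 1;
    }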
At the very least, this magnifies the cost of dealing with AI slop security reports and sometimes also the risk for users.
No.
It's not hard to imagine how that would miss the 99.x% of users who just want to download an HTTP/S resource after reading instructions on some web page.
The issue highlighted in the article is people using AI to invent security problems that don’t exist. That doesn’t go away, no matter how much you strip down or simplify the project.
I’d bet an AI writing tool will happily generate thousands of realistic looking bug reports about a “Hello World” one-liner.
However, curl/libcurl would cripple itself and alienate a good portion of its userbase if it stopped supporting so many protocols, and specific features of protocols.
There's a similar argument made all the time: "I only use 10% of this software". But it doesn't mean they should get rid of the other 90% and eliminate 100% of someone else's use of 10% of the software...
And the real trouble is, there's no guarantee that your bargain with the devil would actually reduce the number of false reports, or reduce the time needed to determine they're false. It does not appear that the reports submitted correlate with the size of the attack surface. The example AI slop includes several reports that don't even call curl code, and yet the wording claims that they do. There's no limit to the scope of bad reports!
Ohhh, copy and pasted a bit too much there.
These people don’t even make the slightest effort whatsoever. I admire Daniel’s patience in dealing with them.
Reading these threads is infuriating. They very obviously just copy and paste AI responses without even understanding what they are talking about.
The amount of art posts that I have shared with others has decreased significantly, to the point where I am almost certain some artists who have created genuine works simply get filtered out because their work "looks" like it could have been AI-generated... It's getting to the point where if I see anything that is AI it's an instant mute or block, because there is nothing of value there - it's just noise clogging up my feed.
I like to use the example of chess. I know that computers can beat human players and that there are technical advancements in the field that are useful in their own right, but I would never consistently watch a game of chess played between a computer and a human. Why? Because I don't care for it. To me, the fun and excitement is in seeing what a HUMAN can achieve, what a HUMAN can create - I apply the same logic to art as well.
As I'm currently learning how to draw myself, I know how difficult it is and seeing other people working hard at their craft to eventually produce something beautiful, after months and years of work - it's a shared experience. It makes me happy!
Seeing someone prompt an AI, wait half-a-minute and then post it on social media does not, even if the end result is of a reasonable quality.
To me AI generated art without repeated major human interventions is almost immediately obvious. There are things it just can't do.
But when the art style is more minimalist or abstract, I find it genuinely difficult to notice a difference and have to start looking at the finer details, hence the mental workload comment. Oftentimes I'll notice an eye not facing the right direction or certain lines appearing too "repetitive", something I rarely see in the works of human artists. It's difficult to explain without actual image examples in front of me.
You can’t know that for sure. It’s the toupée fallacy.
The success rate of AI evading detection will only increase; the issue with "too many fingers" was solved years ago, and there's probably companies actively working on avoiding AI detection already. And on detecting it. It's the new spam / anti-spam, virus / anti-virus arms race.
So what the human is achieving in this case is having been trained by AI.
Today I learned: LLMs and their presence in society eventually force one into producing/crafting/making/creating for fun instead of consuming for fun.
All jokes aside, you got the solution here. ;)
It is, if you're managing a warehouse; then it's a wonderful marvel. And it is a hidden benefit to everyone who receives cheaper products from that warehouse. Nobody cares if it's a human or the Komatsu doing the heavy lifting.
It is patently obvious (though clearly not to everyone) that the person you’re replying to is describing a situation of seeing a human weightlifter vs. a mechanical forklift doing the same in a contest, for entertainment. The analogy works as a good example because it maps to the original post about art.
When you change the setting to a warehouse, you are completely ignoring all the context and trying to undermine the comment on a technicality which doesn’t relate at all to the point. If you want to engage with the analogy properly, you have to keep the original art context in mind at all times.
Of course, that could definitely happen. My point is that I don’t think it did in this case, and that it stretched the analogy so far beyond its major points that it made clear to me a pattern I have seen several times before but could never pinpoint.
I am grateful to that comment for the insight. Understanding how people may distort analogies will force me to create better ones for more productive discussions.
All analogies are limited. That’s the point of an analogy: to focus on a shared element.
https://www.oxfordreference.com/display/10.1093/acref/978019...
> There are indeed people who get great entertainment out of machines that do heavy lifting, and they don't care how much a person can lift.
But crucially not the person making the analogy. They didn’t say a lifting machine would be uninteresting to everyone, only that it isn’t worth their (the commenter’s) time. They made an analogy to explain what they themselves think, not to push their point of view as ultimate universal truth.
But that particular reply did not constrain itself to the context. It implied that a forklift lifting 150kg is not interesting at all. Which offends those of us who do appreciate watching heavy machinery do its work. That explains the unavoidable kneejerk replies of "actually, it is [interesting]".
People post AI art to take credit for being a skilled artist, just like people posting others' art as their own. It's lame.
If I am to be a bit controversial among artists: we're exposed to so much good art today that most art posted online is "average" at best. (The bar is so high that it takes 20+ years to become above average for most.)
It's average even if a human posted it, but fun because a human spent effort making something cool. When an AI generates average art it's... just average art. Scrolling Google Images to look at art is also pretty dull, because it's devoid of the human behind it.
But I think that's less generally applicable. There is a lot of the online art community and market that is really circular; people who like art do art and buy/commission art from others they follow and know.
That market will likely never disappear even if AI fully surpasses humans, because the specific humans were the point all along.
But I think my previous comment applies more broadly beyond that community.
It's about implications, and trust. Note how AI art is thriving on platforms where it's clearly marked as such. People can then go into it without the "hand crafted" expectation, and enjoy it fully for what it is. AI-enabling subreddits and Pixiv come to mind, for example.
It's like spam. Something like art used to have a value, like a message, because someone had to go to the trouble of making it. Now, nothing matters or has value because it's a flood of meaninglessness, like spam, so it stops mattering whether you can tell if it's a scam or whether maybe some random person DOES have a neat idea for stopping my back from hurting.
That person, and any artist, is now out of luck, because of volume. There are too many cheap reasons to pretend to be them, and like spam it's just unmanageable.
The most hostile is Apple, where you cannot expect any kind of feedback on bug reports. You are really lucky if you get any feedback from them at all.
Getting good feedback is the most valuable thing ever. I don't mind having to pay $5/year to make reports if I know I would get feedback.
Hard disagree. When you get feedback from Apple, it’s more often than not a waste of time. You are lucky when you get no feedback and the issue is fixed.
There’s an entire industry of “penetration testers” that do nothing more than run Fortify against your code base and then expect you to pay them $100k for handing over the findings report. And now AI makes it even easier to do that faster.
We have an industry that pats security engineers on the back for discovering the “coolest” security issue - and nothing that incentivizes them to make sure that it actually is a useful finding, or more importantly, actually helping to fix it. Even at my big tech company, where I truly think some of the smartest security people work, they all have this attitude that their job is just to uncover an issue, drop it in someone else’s lap, and then expect a gold star and a payout, never mind the fact that their finding made no sense and was just a waste of time for the developer team. There is an attitude that security people don’t have any responsibility for making things better - only for pointing out the bad things. And that attitude is carrying over into this AI slop.
There’s no incentive for security people to not just “spray and pray” security issues at you. We need to stop paying out bug bounties for discovering things, and instead better incentivize fixing them - in the process weeding out reports that don’t actually lead to a fix.
Haha, I kid. Make no mistake, this is the AI sales pitch. A *weapon* to use on your opposition. If the hackers were trying to win by using it to wear down the defenders it could not possibly be working better.
I've recently cut bounties to zero for all but the most severe issues, hoping to refocus the program on rewarding interesting findings instead of the low value reports.
So far it's done nothing to improve the situation, because nobody appears to read the rewards information before emailing. I think reading scope/rewards takes too much time per company for these low value reports.
I think that speaks volumes about how much time goes into the actual discoveries.
Open to suggestions to improve the signal-to-noise ratio from anyone who's made notable improvements to a bug bounty program.
Those people wouldn't care about the bounty, overwhelming the system would be the point.
To say nothing of the uses of real exploits: that's salient.
I see the issue with this: it's payment platforms. Despite the hate, cryptocurrency seems like it could be a solution. But in practice, people won't take the time to set up a crypto wallet just to submit a bug report, and if crypto becomes popular, it may get regulations and middlemen like fiat (which add friction, e.g. chargebacks, KYC, revenue cuts).
However if more services use small fees to avoid spam it could work eventually. For instance, people could install a client that pays such fees automatically for trusted sites which refund for non-spam behavior.
I don't know if the link you posted answers the question, I get a blocked page ("You are visiting this page because we detected an unsupported browser"). You'd think a chromium-based browser would be supported but even that isn't good enough. I love open standards like html and http...
Edit: just noticed it goes to hackerone and not curl's own website. Of course they'd say curl can't solve payments on their own
That idea was considered and rejected in the article:
> People mention charging a fee for the right to submit a security vulnerability (that could be paid back if a proper report). That would probably slow them down significantly sure, but it seems like a rather hostile way for an Open Source project that aims to be as open and available as possible. Not to mention that we don’t have any current infrastructure setup for this – and neither does HackerOne. And managing money is painful.
...or, just add it to the generated text themselves.
This will only cause content to go behind paywalls. A lot of open-source projects will go closed source, not only because of the increased work maintainers have to do to review and audit patches for potential AI hallucinations, but also because their work is being used to train LLMs and relicensed as proprietary.
> The general trend so far in 2025 has been way more AI slop than ever before (about 20% of all submissions)
Of course that 20% of AI slop submissions are not good, but there’s an overarching problem with juniors clamoring for open source contributions without having the skills or abilities to contribute something useful.
They heard that open source contributions get jobs, so they spam contributions to famous projects.
No need to throw out the baby with the bathwater.
I can't necessarily block them because I might need them for a password reset or a future transaction. I spent a while trying to set up filters that work, but eventually got tired of trying to fight it.
Now I mostly just avoid email for anything other than these things, and have just given up on it otherwise.
Spam won. Email lost.
And social media services are losing in similar ways. Rather than showing me things from people I subscribe to, they're pushing all kinds of automated "content for you" that I don't want and can't opt out of. So I'm pulling away from them as well.
The internet is filling with slop and distraction, both AI generated and not, and we haven't "solved" it, it's constantly getting worse.
The style is obviously GPT-generated and I think the curl team knows that; still, they proceed to answer and keep asking the author questions about the report to get more info.
What really bothers me is that these idiots are consuming the time and patience of nice and reasonable people. I really hope they can find a solution and don't eventually snap from having to deal with this bullshit.
> > hey chat, give this in a nice way so I reply on hackerone with this comment
> This looks like you accidentally pasted a part of your AI chat conversation into this issue, even though you have not disclosed that you're using an AI even after having been asked multiple times.
A sample of what they have to deal with. Source:
"hey chat, give this in a nice way so I reply on hackerone with this comment" is not language used naturally. It virtually never precedes high-quality conversation between humans so you aren't going to get that. You would only say this when prompting an LLM (poorly at that) so you are activating weights encoding information from LLM slop in the training data.
https://github.com/permissionlesstech/bitchat/pull/155
"Security audit: Fix critical encryption vulnerabilities"
https://github.com/permissionlesstech/bitchat/pull/219
"Satellite networking extension for BitChat". It doesn't even look like it's going to work...
https://github.com/permissionlesstech/bitchat/pull/199
"Secure Enclave key generation and retrieval to KeychainManager". Same folk, replying to themselves with AI.
My bet is that git hosting providers like GitHub etc. should start providing features to allow us for better signal/noise ratio
And I don't take pull requests: the only exception has been to accommodate a downstream user who was running a script to incorporate the code, and that was so far outside my usual experience that it took way too long to register that it was a legitimate pull request.
https://gitlab.com/ququruza