They could offer value, but just rarely, at least with the LLM/model/context they used.
> toll it takes to deal with these mind-numbing stupidities.
Could have a special area for submitting these where AI does the rejection letter and banning.
Eating human excrement can also offer value in the form of undigested pieces of corn and other seeds. Are you interested?
So…
What they did was:
1) Prompt an LLM for a generic description of potential buffer overflows in strcpy() and generic demonstration code for a buffer overflow. (With no connection to curl or even OpenSSL at all)
2) Present some stack traces and grep results that show usage of strcpy() in curl and OpenSSL.
3) Simply claim that the strcpy() usages from 2) somehow indicate a buffer overflow, with no additional evidence.
4) When called out, just pretend that the demonstrator code from 1) were the evidence, even though it's obvious that it's just a textbook example and doesn't call any code from curl.
It's not that they found some potentially dangerous code in curl and didn't go all the way to prove an overflow, which could have at least some value.
The entire thing is just bullshit made to look like a vulnerability report. There is nothing behind it at all.
Edit: Oh, cherry on top: The demonstrator doesn't even use strcpy() - nor any other kind of buffer overflow. It tries to construct some shellcode in a buffer, then gives up and literally calls execve("/bin/sh")...
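To make the absurdity concrete, here is a hypothetical reconstruction of what such a "demonstrator" amounts to (an illustrative sketch, not the actual submitted code): it never touches curl or OpenSSL, overflows nothing, and just execs a shell on the machine of whoever runs it.

```c
/* Hypothetical reconstruction of the kind of "PoC" described above.
 * Note what it does NOT do: it never calls any curl or OpenSSL code,
 * and nothing here overflows. */
#include <stdio.h>
#include <unistd.h>

int main(void) {
    unsigned char shellcode[64] = {0}; /* filled with stagey bytes, never executed */
    (void)shellcode;

    /* Scary-looking output, pure theater. */
    printf("[*] Triggering buffer overflow in curl...\n");
    printf("[*] Executing shellcode...\n");

    /* The "proof": directly exec a shell on your own machine. */
    char *const argv[] = { "/bin/sh", NULL };
    execve("/bin/sh", argv, NULL);
    return 0;
}
```

Run it and you do indeed get a shell, which proves nothing beyond your own ability to type execve.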
The worst part is that once they are asked for clarifications by the poor maintainers, they go on the offensive and become aggressive. Imagine the nerve of some people: using LLMs to try to gaslight an actual expert into believing they made a mistake, and then acting annoyed/angry when the expert asks normal questions.
My guess is that the aggression is part of the ruse. Trying to start drama/intimidating the other when your bluff is being called out is the oldest strategy...
(You could see a similar pattern in the xz backdoor scheme, where they were deliberately causing distress for the maintainer to lower their guard.)
Or maybe the guy here hoped that the reviewers would run the demo - blindly - and then somehow believe it was real? Because it prints some scary messages and then does open a shell. Even if that's the only thing it does...
Still a net negative overall, given that you have to spend a lot of effort separating the wheat from the chaff.
> Could have a special area for submitting these where AI does the rejection letter and banning.
So we'll just have one AI talking to another AI with an indeterminate outcome and nobody learns anything of value. Truly we live in the future!
Do remember "we're" (hi, interjecting) talking about open source maintainers; we didn't all make curl or Facebook.
Everyone is talking about AI in the comments, but the article estimates only 20% of their submissions are AI slop.
The rest are from people who want a curl contribution or bug report for their resume. With all of the talk about open source contributions as a way to boost your career or get a job, getting open source contributions has become a checklist item for many juniors looking for an edge. They don’t have the experience to know what contributions are valuable or correct, they just want something to put on their resume.
I wonder what the solution is here. You need to be able to receive reports from anyone, so a reputation-based system is not applicable. It also seems like we cannot detect whether a piece of text was generated with an LLM...
Based on reading those same reports, I think you totally can detect it, and Daniel also thinks that -- or at least, you can tell when it's very obvious and the user has pretty much just pasted what they got from the LLM into the submit box. Sneaky humans, trying to disguise their sources by removing the obvious tells, make it harder.
The curl staff assume good faith and let the submitter explain themselves. Maybe the submitter has a reason for using it -- the submitter may be honest or dishonest as they wish.
I like that the curl staff ask submitters to admit up-front if any AI was used, so they can discriminate between people with a legitimate use case (e.g. people who don't speak English but can find valid exploits and want to use machine translation for their writeup), versus others (e.g. people who think generalised LLMs can do security analysis).
But even so, the central point of this blog post is that the bad humans waste the maintainers' time, that time can't be gotten back, and even directly banning them does not have much of an effect on the bad humans, or on the next batch of bad humans.
LLMs are the perfect tool to annihilate online communities. I wonder when we'll see the first deliberate attack. These incidents seem (so far) isolated and just driven by greed.
That would likely fix some of it, but I suspect that you'd still get a lot, anyway, because people program their crawlers to hit everything, regardless of their relevance. Doesn't cost anything more, so why not? Every little hit adds to the coffers.
Uhh... How does it not cost more to hit everything vs specific areas? Especially when you consider the actual payout rate for such approaches, which cannot possibly be very high - every little hit does not add to the coffers, which means you have to be more selective about what you try.
AI just allows them to be more effective.
It supports almost every file-related networking protocol under the sun and a few more just for fun. (https://everything.curl.dev/protocols/curl.html)
Meanwhile 99.8% of users (my assumption) just use it for HTTP.
Here are a few complex protocols I bet many do not know that curl supports (a short libcurl sketch follows the list):
- SMB
- IMAP
- LDAP
- RTMP
- RTSP
- SFTP
- SMTP
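To illustrate (a sketch, not project code; the host and credentials are placeholders), the same easy API drives all of these protocols, and only the URL scheme changes:

```c
/* Minimal libcurl sketch: the same API drives imap://, sftp://, smb://, ...
 * mail.example.com and the credentials below are placeholders. */
#include <stdio.h>
#include <curl/curl.h>

int main(void) {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    if (curl) {
        /* Swap the scheme to sftp://, smb://, ldap://, rtsp://, ... and
         * libcurl speaks that protocol instead (if built with support). */
        curl_easy_setopt(curl, CURLOPT_URL, "imap://mail.example.com/INBOX");
        curl_easy_setopt(curl, CURLOPT_USERPWD, "user:password");

        CURLcode res = curl_easy_perform(curl);
        if (res != CURLE_OK)
            fprintf(stderr, "curl_easy_perform() failed: %s\n",
                    curl_easy_strerror(res));
        curl_easy_cleanup(curl);
    }
    curl_global_cleanup();
    return 0;
}
```

Build with something like `cc demo.c -lcurl`. Whether a given scheme works depends on which protocols the local libcurl was compiled with (`curl --version` lists them).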
At the very least, this magnifies the cost of dealing with AI slop security reports and sometimes also the risk for users.
No.
It's not hard to imagine how that would miss the 99.x% of users who just want to download an HTTP/S resource after reading instructions on some web page.
The issue highlighted in the article is people using AI to invent security problems that don’t exist. That doesn’t go away, no matter how much you strip down or simplify the project.
I’d bet an AI writing tool will happily generate thousands of realistic looking bug reports about a “Hello World” one-liner.
However, curl/libcurl would cripple itself and alienate a good portion of its userbase if it stopped supporting so many protocols, and specific features of protocols.
There's a similar argument made all the time: "I only use 10% of this software". But it doesn't mean they should get rid of the other 90% and eliminate 100% of someone else's use of 10% of the software...
And the real trouble is, there's no guarantee that your bargain with the devil would actually reduce the number of false reports, or reduce the time needed to determine they're false. It does not appear that the volume of bad reports correlates with the size of the attack surface. The example AI slop includes several reports that don't even call curl code, and yet the wording claims that they do. There's no limit to the scope of bad reports!
Ohhh, copy and pasted a bit too much there.
These people don’t even make the slightest effort whatsoever. I admire Daniel’s patience in dealing with them.
Reading these threads is infuriating. They very obviously just copy and paste AI responses without even understanding what they are talking about.
The amount of art posts that I have shared with others has decreased significantly, to the point where I am almost certain some artists who have created genuine works simply get filtered out because their work "looks" like it could have been AI-generated... It's getting to the point where if I see anything that is AI it's an instant mute or block, because there is nothing of value there - it's just noise clogging up my feed.
I like to use the example of chess. I know that computers can beat human players and that there are technical advancements in the field that are useful in their own right, but I would never consistently watch a game of chess played between a computer and a human. Why? Because I don't care for it. To me, the fun and excitement is in seeing what a HUMAN can achieve, what a HUMAN can create - I apply the same logic to art as well.
As I'm currently learning how to draw myself, I know how difficult it is and seeing other people working hard at their craft to eventually produce something beautiful, after months and years of work - it's a shared experience. It makes me happy!
Seeing someone prompt an AI, wait half-a-minute and then post it on social media does not, even if the end result is of a reasonable quality.
To me AI generated art without repeated major human interventions is almost immediately obvious. There are things it just can't do.
But when the art style is more minimalist or abstract, I find it genuinely difficult to notice a difference and have to start looking at the finer details, hence the mental workload comment. Oftentimes I'll notice an eye not facing the right direction or certain lines appearing too "repetitive", something I rarely see in the works of human artists. It's difficult to explain without actual image examples in front of me.
You can’t know that for sure. It’s the toupée fallacy.
So what the human is achieving in this case is having been trained by AI.
Today I learned: LLMs and their presence in society eventually force one into producing/crafting/making/creating for fun instead of consuming for fun.
All jokes aside, you got the solution here. ;)
It is, if you're managing a warehouse; then it's a wonderful marvel. And it is a hidden benefit to everyone who receives cheaper products from that warehouse. Nobody cares if it's a human or the Komatsu doing the heavy lifting.
It is patently obvious (though clearly not to everyone) that the person you’re replying to is describing a situation of seeing a human weightlifter vs. a mechanical forklift doing the same in a contest, for entertainment. The analogy works as a good example because it maps to the original post about art.
When you change the setting to a warehouse, you are completely ignoring all the context and trying to undermine the comment on a technicality which doesn’t relate at all to the point. If you want to engage with the analogy properly, you have to keep the original art context in mind at all times.
People post AI art to take credit for being a skilled artist, just like people posting others' art as their own. It's lame.
If I am to be a bit controversial among artists; we're exposed to so much good art today that most art posted online is "average" at best. (The bar is so high that it takes 20+ years to become above average for most)
It's average even if a human posted it, but fun because a human spent effort making something cool. When an AI generates average art it's... just average art. Scrolling Google Images to look at art is also pretty dull, because it's devoid of the human behind it.
It's about implications, and trust. Note how AI art is thriving on platforms where it's clearly marked as such. People can then go into it without the "hand crafted" expectation and enjoy it fully for what it is. AI-enabling subreddits and Pixiv come to mind, for example.
The most hostile is Apple, where you cannot expect any kind of feedback on bug reports. You are really lucky if you get any feedback at all.
Getting good feedback is the most valuable thing ever. I don't mind having to pay $5/year to make reports if I know I would get feedback.
Hard disagree. When you get feedback from Apple, it’s more often than not a waste of time. You are lucky when you get no feedback and the issue is fixed.
There’s an entire industry of “penetration testers” that do nothing more than run Fortify against your code base and then expect you to pay them $100k for handing over the findings report. And now AI makes it even easier to do that faster.
We have an industry that pats security engineers on the back for discovering the “coolest” security issue - and nothing that incentivizes them to make sure that it actually is a useful finding, or more importantly, actually helping to fix it. Even at my big tech company, where I truly think some of the smartest security people work, they all have this attitude that their job is just to uncover an issue, drop it in someone else’s lap, and then expect a gold star and a payout, never mind the fact that their finding made no sense and was just a waste of time for the developer team. There is an attitude that security people don’t have any responsibility for making things better - only for pointing out the bad things. And that attitude is carrying over into this AI slop.
There’s no incentive for security people to not just “spray and pray” security issues at you. We need to stop paying out bug bounties for discovering things, and instead better incentivize fixing them - in the process weeding out reports that don’t actually lead to a fix.
Haha, I kid. Make no mistake, this is the AI sales pitch. A *weapon* to use on your opposition. If the hackers were trying to win by using it to wear down the defenders it could not possibly be working better.
I've recently cut bounties to zero for all but the most severe issues, hoping to refocus the program on rewarding interesting findings instead of the low value reports.
So far it's done nothing to improve the situation, because nobody appears to read the rewards information before emailing. I think reading scope/rewards takes too much time per company for these low value reports.
I think that speaks volumes about how much time goes into the actual discoveries.
Open to suggestions to improve the signal-to-noise ratio from anyone who's made notable improvements to a bug bounty program.
I see the issue with this: it's payment platforms. Despite the hate, cryptocurrency seems like it could be a solution. But in practice, people won't take time to set up a crypto wallet just to submit a bug report, and if crypto becomes popular, it may get regulations and middlemen like fiat (which add friction, e.g. chargebacks, KYC, revenue cuts).
However if more services use small fees to avoid spam it could work eventually. For instance, people could install a client that pays such fees automatically for trusted sites which refund for non-spam behavior.
I don't know if the link you posted answers the question, I get a blocked page ("You are visiting this page because we detected an unsupported browser"). You'd think a chromium-based browser would be supported but even that isn't good enough. I love open standards like html and http...
Edit: just noticed it goes to hackerone and not curl's own website. Of course they'd say curl can't solve payments on their own
That idea was considered and rejected in the article:
> People mention charging a fee for the right to submit a security vulnerability (that could be paid back if a proper report). That would probably slow them down significantly sure, but it seems like a rather hostile way for an Open Source project that aims to be as open and available as possible. Not to mention that we don’t have any current infrastructure setup for this – and neither does HackerOne. And managing money is painful.
This will only cause content to go behind paywalls. A lot of open-source projects will go closed-source, not only because of the increased work maintainers have to do to review and audit patches for potential AI hallucinations, but also because their work is being used to train LLMs and relicensed as proprietary.
> The general trend so far in 2025 has been way more AI slop than ever before (about 20% of all submissions)
Of course the 20% of submissions that are AI slop are not good, but there’s an overarching problem with juniors clamoring for open source contributions without having the skills or abilities to contribute something useful.
They heard that open source contributions get jobs, so they spam contributions to famous projects.
The style is obviously GPT-generated and I think the curl team knows that; still they proceed to answer and keep asking the author questions about the report to get more info.
What really bothers me is that these idiots are consuming the time and patience of nice and reasonable people. I really hope they can find a solution and don't eventually snap from having to deal with this bullshit.
My bet is that git hosting providers like GitHub etc. should start providing features that give us a better signal-to-noise ratio.
And I don't take pull requests: the only exception has been to accommodate a downstream user who was running a script to incorporate the code, and that was so far out of my usual experience that it took way too long to register that it was a legitimate pull request.
https://gitlab.com/ququruza