With the benefit of hindsight, the issue should have been dismissed outright, and the account reported as invalid, right at the third message (November 30, 2024, 8:58pm UTC); the fact that the curl maintainers allowed the "dialog" to continue for six more messages proved to be a mistake and a waste of effort.
I would even encourage curl maintainers to reject upfront any report that fails to mention a line number in the source code, or a specific piece of input that triggers an issue.
It's unfortunate that AI is being used to worsen the signal/noise ratio [1] of a topic as sensitive as security.
I think the only saving grace right this second is that the hallucinations are obvious, and the text generation is just awkward enough, with its overly eager phrasing, to recognize. But if you're seeing it for the first time, it can be surprisingly convincing.
Or have an infosec captcha, but that's harder to come by
This comment [1] by icing (curl staff) sums up the risk:
> "This report and your other one seem like an attack on our resources to handle security issues."
Maintainers of widely deployed, popular software, including those who have openly committed to engineering excellence [2] and responsiveness [like the curl project, AFAICT], cannot afford /not/ to treat each submission with some level of preliminary attention and seriousness.
Submitting low quality, bogus reports generated by a hallucinating LLM, and then doubling down by being deliberately opaque and obtuse during the investigation and discussion, is disgraceful.
[1] https://hackerone.com/reports/3125832#activity-34389935
[2] https://curl.se/docs/bugs.html (Heading: "Who fixes the problems")
weird_trousers•2h ago
All are wrong, full of hallucinations, and reviewers clearly waste their time on that kind of thing.
AI is here to accelerate people's jobs, not to cost them their minds and time.
Please read the news before responding. An AI can do that, why don’t you do that too…?
scott_w•2h ago
This isn’t fuzzing, this is a totally garbage report that I’d have chewed out any security “researcher” reporting this to me.
Given how it was the first link I clicked I feel safe in saying the probability is the rest are just as bad.
npteljes•2h ago
Take a look at this example: https://hackerone.com/reports/2823554 . The fool reporting this can't even justify his AI generated report, not even with the further use of AI. There is no AI revolution here, just spam, and grift.
Propelloni•1h ago
Obviously, the logic doesn't hold. Anyway, when asked to provide a specific line in a specific module where strcpy() is not bounds-checked, the response was "probably in curl.c near a call to strcpy()." That moved from sloppy to stupid pretty quickly, didn't it?
And there are dozens if not hundreds of these kinds of reports. Hostility towards the reporters (whether AI or not) is justified.
e2le•1h ago
https://hackerone.com/reports/2887487
Given the limited resources available to many open source projects and the volume of fraudulent reports, these reports function similarly to a DDoS attack.
mpalmer•27m ago
Because they have no idea what they're doing and for some reason they think they can use LLMs to cosplay as security researchers.