
Death by a Thousand Slops

https://daniel.haxx.se/blog/2025/07/14/death-by-a-thousand-slops/
191•robin_reala•3h ago

Comments

konsalexee•3h ago
I think eventually all OSS projects/repos will suffer from this.

My bet is that git hosting providers like GitHub etc. will have to start providing features that give maintainers a better signal/noise ratio

detaro•3h ago
GitHub's owner is betting the farm on pushing slop, so that seems unlikely to happen there anytime soon.
hiccuphippo•2h ago
They just need to offer you more slop to review the slop and give it a sloppiness score.
nkrisc•2h ago
Why would GitHub develop features that are adversarial to one of Microsoft’s favorite products?
oytis•2h ago
E.g. to secure the quality of training data?
Keyframe•2h ago
So that you pay for both.
nkrisc•2h ago
Not even the mafia has it that good. You only pay them so they won’t beat you up. Imagine if you paid them to beat you up and then paid them to protect you from them.
notachatbot123•1h ago
Learning from Cloudflare: Host malware and DDOSsers AND provide protection against them = $$$
nkrisc•1h ago
I feel my question is naive in retrospect.
Applejinx•2h ago
Depends. I'm not suffering it at all, but I'm a sort of research project producing variations on audio processing under MIT license.

And I don't take pull requests: the only exception has been to accommodate a downstream user who was running a script to incorporate the code, and that was so far outside my usual experience that it took way too long to register that it was a legitimate pull request.

rwmj•1h ago
qemu & libvirt are already seeing a bunch of these. Here's a recent spammer sending AI slop reports:

https://gitlab.com/ququruza

raywatcher•2h ago
For all the discussions about the slopification of the internet, the human toll on open source maintainers isn’t really talked about. It's one thing to get flooded with bad reports; it's another to have to mentally filter AI-generated submissions designed to "sound correct" but offer no real value. Totally agree with the author mentioning the emotional toll it takes to deal with these mind-numbing stupidities.
friedel•2h ago
> but offer no real value

They could offer value, but just rarely, at least with the LLM/model/context they used.

> toll it takes to deal with these mind-numbing stupidities.

Could have a special area for submitting these where AI does the rejection letter and banning.

meindnoch•2h ago
>They could offer value, but just rarely, at least with the LLM/model/context they used.

Eating human excrement can also offer value in the form of undigested pieces of corn and other seeds. Are you interested?

ElFitz•1h ago
Funnily enough, fecal transplants (Fecal Microbiota Transplants, FMT) are a thing, used to help treat a range of diseases. It’s even being investigated to help treat depression.

So…

javcasas•48m ago
I'm sure it does. But would you like one every other week, like the LLM slop?
xg15•2h ago
I think looking at one example is useful: https://hackerone.com/reports/2823554

What they did was:

1) Prompt an LLM for a generic description of potential buffer overflows in strcpy() and generic demonstration code for a buffer overflow. (With no connection to curl or even OpenSSL at all)

2) Present some stack traces and grep results that show usage of strcpy() in curl and OpenSSL.

3) Simply claim that the strcpy() usages from 2) somehow indicate a buffer overflow, with no additional evidence.

4) When called out, just pretend that the demonstration code from 1) was the evidence, even though it's obviously just a textbook example and doesn't call any code from curl.

It's not that they found some potentially dangerous code in curl and didn't go all the way to prove an overflow, which would have at least some value.

The entire thing is just bullshit made to look like a vulnerability report. There is nothing behind it at all.

Edit: Oh, and the cherry on top: the demonstration code doesn't even use strcpy() - nor any other kind of buffer overflow. It tries to construct some shellcode in a buffer, then gives up and literally calls execve("/bin/sh")...

deepdarkforest•1h ago
> The problem is in strcpy in the src files of curl.. have you seen the exploit code ??????

The worst part is that once the poor maintainers ask for clarifications, they go on the offensive and become aggressive. Imagine the nerve of some people: using LLMs to try to gaslight an actual expert into believing they made a mistake, and then acting annoyed/angry when the expert asks normal questions.

xg15•1h ago
Yep.

My guess is that the aggression is part of the ruse. Starting drama and intimidating the other party when your bluff is called is the oldest strategy there is...

(You could see a similar pattern in the xz backdoor scheme, where they were deliberately causing distress for the maintainer to lower their guard.)

Or maybe the guy here hoped that the reviewers would blindly run the demo and then somehow believe it was real? Because it prints some scary messages and then does open a shell. Even if that's the only thing it does...

ndepoel•2h ago
> They could offer value, but just rarely, at least with the LLM/model/context they used.

Still a net negative overall, given that you have to spend a lot of effort separating the wheat from the chaff.

> Could have a special area for submitting these where AI does the rejection letter and banning.

So we'll just have one AI talking to another AI with an indeterminate outcome and nobody learns anything of value. Truly we live in the future!

javcasas•41m ago
It can be better. On slop detection, shadowban the offender and have them argue with two AI "maintainers", and after 30 messages reveal the ruse. Then ban.
anon191928•2h ago
this type of social moderation has existed for well over a decade, and FB had thousands of people hired for it. They were filtering LiveLeak-level or even worse content for years, with humans manually watching or flagging it. So nothing new.
bravetraveler•2h ago
> hired

Do remember "we're" (hi, interjecting) talking about open source maintainers, we didn't all make curl or Facebook

meindnoch•24m ago
My gut tells me that deciding the soundness of a vulnerability report is not in the same complexity class as deciding whether a video shows ISIS torture footage.
empiko•1h ago
It's a human toll everywhere. AI used for peer review effectively forces researchers to implement its suggestions between revisions; AI used by managers suggests bad solutions that engineers are forced to implement; etc. Effectively, the number of person-hours spent following whatever AI models suggest is increasing rapidly. Some of it might make sense, but uncomfortably many hours are burned in vain. There is a real productivity cost to the economy from command chains not being ready to filter out slop.
Aurornis•1h ago
The most notable thing about this article, in my opinion, is the increase in human generated slop.

Everyone is talking about AI in the comments, but the article estimates only 20% of their submissions are AI slop.

The rest are from people who want a curl contribution or bug report for their resume. With all of the talk about open source contributions as a way to boost your career or get a job, getting open source contributions has become a checklist item for many juniors looking for an edge. They don’t have the experience to know what contributions are valuable or correct, they just want something to put on their resume.

4gotunameagain•2h ago
Oh god, going through some of the reports listed on the bottom of the page feels like a nightmare. I cannot imagine how it is for the actual maintainers.

I wonder what the solution is here. You need to be able to receive reports from anyone, so a reputation-based system is not applicable. It also seems like we cannot detect whether a piece of text was generated with LLM..

Hendrikto•2h ago
I would have closed ALL of the linked reports much sooner, and banned the reporters. In most cases it is extremely obvious from very early on in the thread that these people have not the slightest idea what they are saying and just copy-paste AI responses.
amiga386•2h ago
> It also seems like we cannot detect whether a piece of text was generated with LLM

Based on reading those same reports, I think you totally can detect it, and Daniel thinks so too -- or at least, you can tell when it's very obvious and the user has pretty much just pasted what they got from the LLM into the submit box. Sneaky humans, trying to disguise their sources by removing the obvious tells, make it harder.

The curl staff assume good faith and let the submitter explain themselves. Maybe the submitter has a reason for using it -- the submitter may be honest or dishonest as they wish.

I like that the curl staff ask submitters to admit up-front if any AI was used, so they can discriminate between people with a legitimate use case (e.g. people who don't speak English but can find valid exploits and want to use machine translation for their writeup), versus others (e.g. people who think generalised LLMs can do security analysis).

But even so, the central point of this blog post is that the bad humans waste their time, they can't get that time back, and even directly banning them does not have much of an effect on the bad humans, or the next batch of bad humans.

EdwardDiego•2h ago
Reading this particular instance of slop was especially galling. It's like the world's slowest ChatGPT dialogue via a bug tracker.

https://hackerone.com/reports/2298307

0x000xca0xfe•1h ago
DDoSing humans.

LLMs are the perfect tool to annihilate online communities. I wonder when we'll see the first deliberate attack. These incidents seem (so far) isolated and driven merely by greed.

ChrisMarshallNY•2h ago
> Maybe we need to drop the monetary reward?

That would likely fix some of it, but I suspect that you'd still get a lot, anyway, because people program their crawlers to hit everything, regardless of their relevance. Doesn't cost anything more, so why not? Every little hit adds to the coffers.

squigz•2h ago
> Doesn't cost anything more, so why not? Every little hit adds to the coffers.

Uhh... How does it not cost more to hit everything vs specific areas? Especially when you consider the actual payout rate for such approaches, which cannot possibly be very high - every little hit does not add to the coffers, which means you have to be more selective about what you try.

ChrisMarshallNY•2h ago
Spammers and scammers have been running “scattershot” campaigns for decades. Works well for them, I guess, as they still do it.

AI just allows them to be more effective.

lysace•2h ago
Sort of separate, but perhaps also relevant to the thousand cuts/slops: Isn't the scope of curl/libcurl a bit too big?

It supports almost every file-related networking protocol under the sun and a few more just for fun. (https://everything.curl.dev/protocols/curl.html)

Meanwhile, 99.8% of users (I assume) just use it for HTTP.

Here's a few complex protocols I bet many do not know that curl supports:

- SMB

- IMAP

- LDAP

- RTMP

- RTSP

- SFTP

- SMTP

At the very least, this magnifies the cost of dealing with AI slop security reports and sometimes also the risk for users.

soulcutter•1h ago
> Isn't the scope of curl/libcurl a bit too big?

No.

proactivesvcs•1h ago
From this year's curl user survey: whilst HTTP/S is the majority use, more than 10% of users are using FTP and WebSockets, and 5% are still using telnet!

https://curl.se/docs/survey/2025-1.1/

lysace•1h ago
> The survey was announced on the curl-users and curl-library mailing lists (with reminders), numerous times on Daniel’s Mastodon (@bagder@mastodon.social) on LinkedIn and on Daniel’s blog (https://daniel.haxx.se/blog). The survey was also announced on the curl web site at the top of most pages on the site that made it hard to miss for visitors.

It's not hard to imagine how that would miss the 99.x% of users who just want to download an HTTP/S resource after reading instructions on some web page.

appreciatorBus•1h ago
I’m sure the case could be made before a more focussed project, but I think this is orthogonal to bad (or stupid) actors using AI to overwhelm bug reporting channels.

The issue highlighted in the article is people using AI to invent security problems that don’t exist. That doesn’t go away, no matter much you stripped down or simplify the project.

I’d bet an AI writing tool will happily generate thousands of realistic looking bug reports about a “Hello World” one-liner.

lysace•1h ago
I guess you are saying that we should build another tool, or fork curl and remove the non-HTTP stuff. Then the world should transition from curl one-liners to, say, qurl one-liners.
amiga386•1h ago
You can always use another library with more limited use-cases if you're worried about scope. There are thousands of libraries for making HTTP requests, many of which are language-specific and much more ergonomic than libcurl.

However, curl/libcurl would cripple itself and alienate a good portion of its userbase if it stopped supporting so many protocols, and specific features of protocols.

There's a similar argument made all the time: "I only use 10% of this software". But it doesn't mean they should get rid of the other 90% and eliminate 100% of someone else's use of 10% of the software...

And the real trouble is, there's no guarantee that your bargain with the devil would actually reduce the number of false reports, or the time needed to determine they're false. The volume of report submitters does not appear to correlate with the size of the attack surface. The example AI slop includes several reports that don't even call curl code, yet the wording claims they do. There's no limit to the scope of bad reports!

lysace•21m ago
I think there is a solid argument to be made that fewer than one in a thousand curl users ever use anything besides HTTP/S.
yayitswei•2h ago
Make it cost money to submit.
cjs_ac•2h ago
... and use the proceeds to increase the bounties paid for genuine bug reports.
bla3•1h ago
The "Possible routes forward" section in the linked post mentions this suggestion, and why the author doesn't love it.
komali2•1h ago
"Submit a deposit." They get the money back in all cases where the report is determined not to be AI slop, including when it's not a real bug, user error, etc. Otherwise, the deposit is gone.
EdwardDiego•2h ago
> The length check only accounts for tmplen (the original string length), but this msnprintf call expands the string by adding two control characters (CURL_NEW_ENV_VAR and CURL_NEW_ENV_VALUE). This discrepancy allows an attacker ...hey chat, give this in a nice way so I reply on hackerone with this comment

Ohhh, copy and pasted a bit too much there.

Hendrikto•2h ago
> Certainly! Let me elaborate on the concerns raised by the triager:

These people don’t even make the slightest effort whatsoever. I admire Daniel’s patience in dealing with them.

Reading these threads is infuriating. They very obviously just copy and paste AI responses without even understanding what they are talking about.

leovingi•2h ago
And it's not just vulnerability reports that are affected by this general trend. I use social media, X specifically, to follow a lot of artists, mostly for inspiration and because I find it fun to share some of the work other artists have created. But over the past year or so, the mental workload of figuring out whether a particular piece of art is AI-generated has become too much, and I've started leaning into the safe option of "don't share anything that seems even remotely suspicious unless I can verify the author".

The number of art posts I share with others has decreased significantly, to the point where I am almost certain some artists who have created genuine works simply get filtered out because their work "looks" like it could have been AI-generated... It's getting to the point where if I see anything that is AI, it's an instant mute or block, because there is nothing of value there - it's just noise clogging up my feed.

DaSHacka•2h ago
Genuine question: if you can't tell, why does it matter?
leovingi•2h ago
It's a fair question and one that I've asked myself as well.

I like to use the example of chess. I know that computers can beat human players and that there are technical advancements in the field that are useful in their own right, but I would never consistently watch a game of chess played between a computer and a human. Why? Because I don't care for it. To me, the fun and excitement is in seeing what a HUMAN can achieve, what a HUMAN can create - I apply the same logic to art as well.

As I'm currently learning how to draw myself, I know how difficult it is and seeing other people working hard at their craft to eventually produce something beautiful, after months and years of work - it's a shared experience. It makes me happy!

Seeing someone prompt an AI, wait half-a-minute and then post it on social media does not, even if the end result is of a reasonable quality.

impossiblefork•2h ago
But how can you not tell?

To me AI generated art without repeated major human interventions is almost immediately obvious. There are things it just can't do.

leovingi•1h ago
For the most part I can actually tell, but it also depends on the style of the art. A lot of anime-inspired digital images are immediately obvious - AI tends to add quite a lot of "shine" to its output, if that makes sense. And it's way too clean, sterile even. And it all looks the same.

But when the art style is more minimalist or abstract, I find it genuinely difficult to notice a difference and have to start looking at the finer details, hence the mental-workload comment. Often I'll notice an eye not facing the right direction, or certain lines appearing too "repetitive", something I rarely see in the works of human artists. It's difficult to explain without actual image examples in front of me.

latexr•1h ago
> To me AI generated art without repeated major human interventions is almost immediately obvious.

You can’t know that for sure. It’s the toupée fallacy.

https://rationalwiki.org/wiki/Toupee_fallacy

ants_everywhere•1h ago
All the current active chess players learned by playing the computer repeatedly.

So what the human is achieving in this case is having been trained by AI.

rambambram•1h ago
> As I'm currently learning how to draw myself, I know how difficult it is and seeing other people working hard at their craft to eventually produce something beautiful, after months and years of work - it's a shared experience. It makes me happy!

Today I learned: LLMs and their presence in society eventually force one into producing/crafting/making/creating for fun instead of consuming for fun.

All jokes aside, you've got the solution here. ;)

meindnoch•2h ago
An olympic weightlifter doing clean and jerk with 150kg is worthy of my attention. A Komatsu forklift doing the same is not.
ta8645•1h ago
> A Komatsu forklift doing the same is not ... [worthy of attention]

It is, if you're managing a warehouse; then it's a wonderful marvel. And it is a hidden benefit to everyone who receives cheaper products from that warehouse. Nobody cares if it's a human or the Komatsu doing the heavy lifting.

teddy-smith•1h ago
Well, warehouse managers excluded...
latexr•1h ago
You just made me realise why many people have trouble with analogies, to the point where it seems they are arguing in bad faith. You have to consider the context the analogy is being applied to.

It is patently obvious (though clearly not to everyone) that the person you're replying to is describing a human weightlifter vs. a mechanical forklift in a contest, for entertainment. The analogy works as a good example because it maps to the original post about art.

When you change the setting to a warehouse, you completely ignore that context and try to undermine the comment on a technicality that doesn't relate to the point at all. To engage with the analogy properly, you have to keep the original art context in mind at all times.

ta8645•12m ago
And you're failing to understand that people can understand the analogy, think that it fails to capture the entire situation, and so extend it to make it obvious (although clearly not to everyone) that the analogy is lacking and not very convincing.
npteljes•13m ago
I think this is actually a good counterpoint to something that OP missed. It's not that it isn't great that a Komatsu can also do it; both are great. But we need to have the appropriate expectations to end up with a feeling of appreciation. In the AI case, the art looks like "human art", and often it's also presented as such. Then learning that AI actually did it is akin to betrayal. But people happily appreciate a whole lot of artful things that people didn't make, or only partially "made": electronic music, the sounds and visuals of nature, emergent behavior of things like the Game of Life, the visual output of algorithms, and so on.
aDyslecticCrow•1h ago
Much of what makes art fun is human effort and the show of skill.

People post AI art to take credit for being a skilled artist, just like people posting others' art as their own. It's lame.

If I may be a bit controversial among artists: we're exposed to so much good art today that most art posted online is "average" at best. (The bar is so high that it takes 20+ years to become above average for most.)

It's average even if a human posted it, but it's fun because a human spent effort making something cool. When an AI generates average art, it's... just average art. Scrolling Google Images to look at art is also pretty dull, because it's devoid of the human behind it.

latexr•23m ago
To continue your point, following the human doing it is also infinitely more rewarding because you can witness their progress.
bit1993•57m ago
A human artist puts work and passion into creating beautiful art from almost nothing. It brings them joy that their art brings someone joy. Every art piece has a story behind it; sharing their art with others gives them motivation not only to continue doing it and bless the world with more art, but also feedback that yes, this art is liked by someone out there. This feedback loop is part of what creates healthy civilizations.
nnf•46m ago
For the same reason dealing in counterfeit money matters — just because I can't tell it's fake doesn't mean the person I try to pay won't know or care. If your reputation is your currency, you don't want to damage it by promoting artwork that other people know is AI generated, so it's likely better to play it safe.
npteljes•20m ago
One of the reasons people react so badly to AI art is that they encounter it in a context that implies human art. The discovery then becomes treachery, a breach of trust. Not unlike having sex lovingly, only to discover there was no love at all. Or people being nice to someone without meaning it, and that someone finding out.

It's about implications and trust. Note how AI art is thriving on platforms where it's clearly marked as such. People can then go into it without the "hand crafted" expectation and enjoy it fully for what it is. AI-enabling subreddits and Pixiv come to mind, for example.

silvestrov•2h ago
> charging a fee [...] rather hostile way for an Open Source project that aims to be as open and available as possible

The most hostile is Apple, where you cannot expect any kind of feedback on bug reports. You are really lucky if you get any kind of feedback from Apple.

Getting good feedback is the most valuable thing ever. I wouldn't mind paying $5/year to make reports if I knew I would get feedback.

omnicognate•1h ago
This is because Apple software is perfect by definition. Any perceived bug is an example of someone failing to use the software correctly. Bug reports are records of user incompetence, whose only purpose is to be ritually mocked in morale-enhancing genius confirmation sessions.
latexr•59m ago
> You are really lucky if you get any kind of feedback from Apple.

Hard disagree. When you get feedback from Apple, it’s more often than not a waste of time. You are lucky when you get no feedback and the issue is fixed.

IAmLiterallyAB•2h ago
Minimum reputation to submit might help
__bjoernd•1h ago
How do I gather reputation to submit without being able to submit something?
the_snooze•38m ago
Probably the same as in any other high-trust human interaction: you have to have someone on the inside introduce you and vouch for you.
caioluders•2h ago
Make a private program with monetary rewards and a public program without. Invite only verified researchers.
spydum•1h ago
Right? I thought the value of vuln programs like hackerone and bugbounty was that you could use the submitter's reputation to filter the noise. Don't want to accept low quality submissions from new or low-experience reporters? Turn the knob up..
placardloop•2h ago
These AI reports are just an acceleration of the slop created by similar human “researchers”. The real root cause of this is that most security “professionals” have been trained to do the bare minimum of work and expect a payday from it.

There’s an entire industry of “penetration testers” that do nothing more than run Fortify against your code base and then expect you to pay them $100k for handing over the findings report. And now AI makes it even easier to do that faster.

We have an industry that pats security engineers on the back for discovering the “coolest” security issue - and nothing that incentivizes them to make sure that it actually is a useful finding, or more importantly, actually helping to fix it. Even at my big tech company, where I truly think some of the smartest security people work, they all have this attitude that their job is just to uncover an issue, drop it in someone else’s lap, and then expect a gold star and a payout, never mind the fact that their finding made no sense and was just a waste of time for the developer team. There is an attitude that security people don’t have any responsibility for making things better - only for pointing out the bad things. And that attitude is carrying over into this AI slop.

There’s no incentive for security people to not just “spray and pray” security issues at you. We need to stop paying out but bounties for discovering things, and instead better incentivize fixing them - in the process weeding out reports that don’t actually lead to a fix.

conartist6•1h ago
Oh yes. AI has nothing to do with it! It is Totally Outrageous and Unexpected that AI would be abused to spew a lot of low value crap.

Haha, I kid. Make no mistake, this is the AI sales pitch: a *weapon* to use on your opposition. If the hackers were deliberately using it to wear down the defenders, it could not possibly be working better.

conartist6•1h ago
It is at the same time being used to tear down faith in democracy, all open content on the Internet, and workers' autonomy, and generally serving to make all thought derivative while minimizing the incentives to create anything new that isn't AI
jdefr89•1h ago
Professional vulnerability researcher here... You are correct. Over the years this industry has seen an influx of script kiddies who do nothing but run tools. It's sad, but I really think this field needs more gatekeeping...
whatevsmate•1h ago
How about only sending submissions to humans if they include a reproducible test case? Actual compilable source code plus a payload that reproduces an attack. Would this be too easily gamed by security researchers as well?
ants_everywhere•1h ago
The obvious way forward is to have AI do an initial vetting, and ideally create an exploit, before a human reviews.
anthonyryan1•1h ago
As the only developer maintaining a big bounty program. I believe they are all trending downward.

I've recently cut bounties to zero for all but the most severe issues, hoping to refocus the program on rewarding interesting findings instead of the low value reports.

So far it's done nothing to improve the situation, because nobody appears to read the rewards information before emailing. I suspect reading each company's scope/rewards takes more time than these low-value reporters are willing to spend.

I think that speaks volumes about how much time goes into the actual discoveries.

Open to suggestions to improve the signal-to-noise ratio from anyone who's made notable improvements to a bug bounty program.

Aachen•1h ago
Similarly, from a hacker's point of view, I also think vulnerability reporting is in a downward spiral. In particular, reports organised through a platform like this just aren't reaching the right people. It used to be a PGP email to whoever needed to know, and that worked great. I have no idea if it still would today for you guys, but from my point of view it's the only reliable way to reach a human who cares about the product, rather than someone whose job it is to refuse bounties. I don't want bounties - I've got a day job as a security consultant for that; I'm just reporting what I stumble across. Chocolate and handwritten notes are nice, but primarily I want developers and sysadmins to fix their damn software.
armchairhacker•1h ago
You could charge a fee and give the money back if the report is wrong but seems well-intentioned.

I see the issue with this: payment platforms. Despite the hate, cryptocurrency seems like it could be a solution. But in practice, people won't take the time to set up a crypto wallet just to submit a bug report, and if crypto becomes popular, it may attract regulations and middlemen like fiat has (which add friction, e.g. chargebacks, KYC, revenue cuts).

However if more services use small fees to avoid spam it could work eventually. For instance, people could install a client that pays such fees automatically for trusted sites which refund for non-spam behavior.
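The refundable-fee idea above can be sketched as a simple escrow ledger: the reporter stakes a small deposit which is returned unless the report is judged spam. `ReportDesk` and the deposit amount are hypothetical, purely to illustrate the flow; none of this reflects a real platform.

```python
# Hypothetical refundable-deposit scheme for bug reports.
from dataclasses import dataclass, field

DEPOSIT = 5.00  # stake per report, in whatever unit the platform settles


@dataclass
class ReportDesk:
    escrow: dict = field(default_factory=dict)  # report_id -> staked amount
    forfeited: float = 0.0                      # total kept from spam reports

    def submit(self, report_id: str) -> None:
        """Reporter stakes the deposit when filing."""
        self.escrow[report_id] = DEPOSIT

    def resolve(self, report_id: str, spam: bool) -> float:
        """Refund the stake for good-faith reports; keep it for spam.

        Returns the amount refunded to the reporter."""
        stake = self.escrow.pop(report_id)
        if spam:
            self.forfeited += stake
            return 0.0
        return stake


desk = ReportDesk()
desk.submit("r1")
desk.submit("r2")
refund = desk.resolve("r1", spam=False)  # well-intentioned: full refund
kept = desk.resolve("r2", spam=True)     # slop: stake is forfeited
```

The point of the design is that a legitimate reporter's expected cost is near zero, while a spammer's cost scales linearly with volume.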

jannes•1h ago
This is probably something that the platform HackerOne should implement. It can't be addressed on the project level.

https://hackerone.com/curl/hacktivity

Aachen•1h ago
Why?

I don't know if the link you posted answers the question, I get a blocked page ("You are visiting this page because we detected an unsupported browser"). You'd think a chromium-based browser would be supported but even that isn't good enough. I love open standards like html and http...

Edit: just noticed it goes to hackerone and not curl's own website. Of course they'd say curl can't solve payments on their own

latexr•1h ago
> You could charge a fee and give the money back if the report is wrong but seems well-intentioned.

That idea was considered and rejected in the article:

> People mention charging a fee for the right to submit a security vulnerability (that could be paid back if a proper report). That would probably slow them down significantly sure, but it seems like a rather hostile way for an Open Source project that aims to be as open and available as possible. Not to mention that we don’t have any current infrastructure setup for this – and neither does HackerOne. And managing money is painful.

Dilettante_•1h ago
Takes all the self-control I have not to make a crude joke about the title
jdefr89•1h ago
Some of these AI slop report exchanges are absolutely hilarious. Love seeing people caught red-handed and then trying to play it off. This is why Vulnerability Research needs more gatekeeping.
IsTom•1h ago
You could require that submissions include an expletive or anything else that LLMs are sanitized to not produce. With how lazy these people are that ought to filter out at least some of them.
bit1993•1h ago
AI slop is rapidly destroying the WWW; most of the content is becoming lower and lower quality, and it's difficult to tell if it's true or hallucinated. Pre-AI web content is now more like the gold standard in terms of correctness, and browsing the Internet Archive is much better.

This will only push content behind paywalls. A lot of open-source projects will go closed source, not only because of the increased work maintainers have to do to review and audit patches for potential AI hallucinations, but also because their work is being used to train LLMs and effectively re-licensed as proprietary.

Aurornis•1h ago
There’s more to this story than AI slop:

> The general trend so far in 2025 has been way more AI slop than ever before (about 20% of all submissions)

Of course the 20% of submissions that are AI slop are not good, but there's an overarching problem: juniors clamoring for open source contributions without having the skills or abilities to contribute something useful.

They heard that open source contributions get jobs, so they spam contributions at famous projects.

Expurple•41m ago
Permissively-licensed projects (which is the majority of FOSS projects out there) could always be re-licensed to proprietary. I publish most of my code under permissive licences and will continue doing that. LLM training doesn't really change anything for me.
pinebox•42m ago
Maybe a curl Patreon for would-be H1 contributors? Just need to figure out a donation amount that is trivial for legitimate security researchers, but too rich for spammers.
jgb1984•29m ago
LLMs are a net negative on society on so many levels.
soyyo•28m ago
Of the 21 reports included as examples, I have looked at number two: Buffer Overflow Vulnerability in WebSocket Handling #2298307.

The style is obviously GPT-generated, and I think the curl team knows that; still, they proceed to answer and keep asking the report's author questions to get more info.

What really bothers me is that these idiots are consuming the time and patience of nice and reasonable people. I really hope they can find a solution and don't eventually snap from having to deal with this bullshit.