This was like two weeks ago. These things suck.
Yes. Unfortunately, some companies seem to pay out the bug bounty without even verifying that the report is actually valid. This can be seen on the "reporter"'s profile: https://hackerone.com/evilginx
I wonder if you could use AI to estimate the probability that a submission is AI bullshit and deprioritize it accordingly?
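As a rough illustration of that idea, here is a minimal Python sketch that scores reports on a few slop-ish signals and sorts the triage queue accordingly. Everything here is assumed for illustration: the `Report` shape, the marker phrases, and the weights are made up, not a tuned classifier.

    import re
    from dataclasses import dataclass

    # Phrases that tend to show up in LLM-generated reports. Purely
    # illustrative; a real system would learn these from labeled data.
    SLOP_MARKERS = [
        r"as an ai language model",
        r"i hope this helps",
        r"in conclusion",
        r"this (critical )?vulnerability could allow",
    ]

    @dataclass
    class Report:
        title: str
        body: str

    def slop_probability(report: Report) -> float:
        """Crude 0..1 score for how AI-slop-like a report looks."""
        text = f"{report.title}\n{report.body}".lower()
        hits = sum(bool(re.search(p, text)) for p in SLOP_MARKERS)
        # A report with no PoC or reproduction commands is another weak signal.
        no_poc = "curl " not in text and "```" not in report.body
        return min(1.0, hits / len(SLOP_MARKERS) + (0.3 if no_poc else 0.0))

    def triage_order(reports: list[Report]) -> list[Report]:
        """Review the least slop-like reports first; don't auto-reject."""
        return sorted(reports, key=slop_probability)

Deprioritizing rather than auto-rejecting would keep false positives reviewable, which matters if the classifier is as crude as this one.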
I wonder if reputation systems might work here. You could give anyone who IDs with an AML/KYC provider some starting reputation, enough for two or three reports; let people earn reputation by digging through zero-rep submissions; and grant something like 10,000 reputation for each accurate vulnerability found, and hundreds for any accurate promoted vulnerabilities. This would let people interact anonymously if they want to, act quickly if they found something important and are willing to do AML/KYC, and privilege quality reporters.
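A minimal sketch of how that ledger might work. The point values come from the comment above; the function names, the one-point submission cost, and the queue labels are invented for illustration.

    from dataclasses import dataclass

    REPORT_COST = 1            # reputation spent per fast-tracked report (assumed)
    KYC_GRANT = 3              # "enough for two or three reports"
    VULN_REWARD = 10_000       # per accurate vulnerability found
    PROMOTE_REWARD = 100       # per accurate promoted zero-rep submission

    @dataclass
    class Reporter:
        name: str
        reputation: int = 0

    def kyc_verify(r: Reporter) -> None:
        """Grant starter reputation once an AML/KYC provider vouches for r."""
        r.reputation += KYC_GRANT

    def submit(r: Reporter) -> str:
        """Fast-track reports from reporters with reputation to spend;
        everything else lands in the zero-rep pile for volunteer review."""
        if r.reputation >= REPORT_COST:
            r.reputation -= REPORT_COST
            return "fast-track"
        return "zero-rep pile"

    def reward_valid_vuln(r: Reporter) -> None:
        r.reputation += VULN_REWARD

    def reward_promotion(reviewer: Reporter) -> None:
        """Credit a volunteer who promoted an accurate zero-rep report."""
        reviewer.reputation += PROMOTE_REWARD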
Either way, AI is definitely changing the economics of this stuff, in this case enshittifying first.
Personally, I can't imagine how miserable it would be for my hard-earned expertise to be relegated to sifting through slop where maybe one in hundreds or even thousands of inquiries is worth any time at all. But it also doesn't seem prudent to just ignore them.
I don't think better ML/AI technology or better information systems will make a significant difference on this issue. It's fundamentally about trust in people.
I don't know where the limit would go.
> I feel like the problem seems to me to be behavior, not a technology issue.
Yes, it's a behavior issue, but that doesn't mean it can't be solved, or at least minimized, by technology, particularly since technology is what's exacerbating the issue.
> It's fundamentally about trust in people.
Who is lacking trust in whom here?
This tension between responding with what the user wants (e.g. a security report, flattering responses) and pushing back against the user seems like a major alignment problem limiting the effectiveness of such systems.
Well, the reporter stated in the report that they are open for employment: https://hackerone.com/reports/3125832 Anyone want to hire them? They can play with ChatGPT all day and spam random projects with AI slop.
Looking at one of the bogus reports, it doesn't even seem like a real person. Why do this if you're not trying to gain recognition?
They're doing it for money; a handful of their reports did result in payouts. Those reports aren't public, though, so there's no way to know if they actually found real bugs or the reviewer rubber-stamped them without doing their due diligence.
Anything for LinkedIn, a light interface that doesn't require logging in?
I pretty much stopped going to LinkedIn years ago because they started aggressively directing people to log in. I was shocked this post works without login. I don't know if that's how it has always been, or if it's a recent change, or what. It would be nice to have alternative interfaces.
In case some people are getting gated, here is his post:
===
Daniel Stenberg, curl CEO. Code Emitting Organism
That's it. I've had it. I'm putting my foot down on this craziness.
1. Every reporter submitting security reports on #Hackerone for #curl now needs to answer this question:
"Did you use an AI to find the problem or generate this submission?"
(and if they do select it, they can expect a stream of proof of actual intelligence follow-up questions)
2. We now ban every reporter INSTANTLY who submits reports we deem AI slop. A threshold has been reached. We are effectively being DDoSed. If we could, we would charge them for this waste of our time.
We still have not seen a single valid security report done with AI help.
---
This is the latest one that really pushed me over the limit: https://hackerone.com/reports/3125832
===
I just opened the site with JS off on mobile. No issues.
https://blog.bismuth.sh/blog/bismuth-found-the-atop-bug
https://www.cve.org/CVERecord?id=CVE-2025-31160
The number of bad reports curl in particular has gotten is staggering, and it's all from people with no background just latching onto a tool that won't elevate them.
AI spam is bad. We've also never had a valid report by an LLM (that we could tell).
People using them will take any explanation of why a bug report is not valid, any questions, or any requests for clarification and run them back through the same confused LLM. The second pass generates even deeper nonsense.
It's making even responding with anything but "closed as spam" not worth the time.
I believe that one day there will be great code-examining security tools. But people believe in their hearts that that day is today, and that they are riding the backs of fire-breathing hack dragons. It's the people that concern me. They cannot tell the difference between truth and garbage.
esafak•1h ago
If you're just parroting what you read, what is it that you do here?!
hashmush•1h ago
- I had to Google it...
- According to a StackOverflow answer...
- Person X told me about this nice trick...
- etc.
Stating your sources should surely not be a bad thing, no?
gruez•1h ago
I don't think I've ever seen anyone lambasted for citing Stack Overflow as a source. At most, they're chastised for not reading the comments, but that's nowhere near as much pushback as for LLMs.
comex•1h ago
Also, using Stack Overflow correctly requires more critical thinking. You have to determine whether any given question-and-answer is actually relevant to your problem, rather than just pasting in your code and seeing what the LLM says. Requiring more work is not inherently a good thing, but it does mean that if you’re citing Stack Overflow, you probably have a somewhat better understanding of whatever you’re citing it for than if you cited an LLM.
mynameisvlad•1h ago
If anything, SO having verified answers helps its credibility slightly compared to LLMs, which are all known to regularly hallucinate (see: literally this post).
dpoloncsak•50m ago
"Hey, I didn't study this, I found it on Google. Take it with a grain of caution, as it came from the internet" has been shortened to "I googled it and...", which is now evolving to "Hey, I asked chatGPT, and...."
hx8•1h ago
Copying and pasting from ChatGPT has the same consequences as copying and pasting from StackOverflow, which is to say you're now on the hook for supporting code in production that you don't understand.
tough•1h ago
I can use ChatGPT to teach me and help me understand a topic, or I can use it to give me an answer that I copy-paste without double-checking.
It just shows how much you care about the topic at hand, no?
multjoy•33m ago
If you don't know anything about the subject area, how do you know if you are asking the right questions?
theamk•56m ago
Starting the answer with "I asked ChatGPT and it said..." almost 100% means the poster did not double-check.
(This is the same with other systems: If you say, "According to Google...", then you are admitting you don't know much about this topic. This can occasionally be useful, but most of the time it's just annoying...)
tough•49m ago
All marketing departments are trying to manipulate you into buying their thing; it should be illegal.
But just testing out this new stuff and seeing what's useful for you (or not) is usually the way.
yoyohello13•1h ago
This is kind of the same with any AI-generated art. I can go generate a bunch of cool images with AI too, so why should I give a shit about your random Midjourney output?
h4ck_th3_pl4n3t•1h ago
They have to prove to someone that they're worth their money. /s
alwa•14m ago
It took a solid hundred years to legitimate photography as an artistic medium, right? To the extent that the controversy still isn’t entirely dead?
Any cool images I ask AI for are going to involve a lot less patience and refinement than some of these things the kids are using AI to turn out…
For that matter, I’ve watched friends try to ask for factual information from LLMs and found myself screaming inwardly at how vague and counterproductive their style of questioning was. They can’t figure out why I get results I find useful while they get back a wall of hedging and waffling.
pixl97•12m ago
It seems the initial rule is rather worthless.
leptons•5m ago
You know how I know the difference between something an AI wrote and something a human wrote? The AI knows the difference between "to" and "too".
I guess you proved your point.
ModernMech•40m ago
Yes, it's true there could have been a skill issue. But it could also be true that the person just wanted input from people rather than from Google. That's why I drew the connection.
cogman10•50m ago
Meaning, instead of listening to a real-life expert in the company telling them how to handle the problem, they ignored my advice and instead dumped in the garbage from GPT.
I really fear that a number of engineers are going to use GPT to avoid thinking. They view it as a shortcut to problem-solving, and it isn't.
layer8•46m ago
I’m saying this tongue in cheek, but there’s some truth to it.
colechristensen•31m ago
Let's just say not listening to someone and then complaining that doing something else didn't work isn't exactly new.
colechristensen•22m ago
Oh but it is, used wisely.
One: it's a replacement for googling a problem, and much faster. Instead of spending half an hour or half a day digging through bug reports, forum posts, and Stack Overflow for the solution to a problem, an LLM is a lot faster, occasionally correct, and very often at least rather close.
Two: it's a replacement for learning how to do something I don't want to learn how to do. Case study: I have to create a decent-enough-looking static error page for a website. I could do an awful job with my existing knowledge; I could spend half a day relearning and tweaking CSS, elements, etc.; or I could ask an LLM to do it and then tweak the results. Five minutes for "good enough", and it really is.
LLMs are not a replacement for real understanding, for digging into a codebase to really get to the core of a problem, or for becoming an expert in something, but in many cases I do not want to, and moreover it is a poor use of my time. Plenty of things are not my core competence or anywhere near the goals I'm trying to achieve. I just need a quick solution for a topic I'm not interested in.
candiddevmike•48m ago
Seems like if all you do is forward questions to LLMs, maybe you CAN be replaced by an LLM.
mrkurt•43m ago
If they're saying it to you, why wouldn't you assume they understand and trust what they came up with?
Do you need people to start with "I understand and believe and trust what I'm about to show you ..."?
JohnFen•9m ago
"I asked X and it said..." is an appeal to authority and suspect on its face whether or not X is an LLM. But when it's an LLM, then it's even worse. Presumably, the reason for the appeal is because the person using it considers the LLM to be an authoritative or meaningful source. That makes me question the competence of the person saying it.