frontpage.

cURL removes bug bounties

https://etn.se/index.php/nyheter/72808-curl-removes-bug-bounties.html
155•jnord•2h ago

Comments

eknkc•1h ago
A list of the slop if anyone is interested:

https://gist.github.com/bagder/07f7581f6e3d78ef37dfbfc81fd1d...

shusaku•1h ago
> To replicate the issue, I have searched in the Bard about this vulnerability.

Seeing Bard mentioned as an LLM takes me back :)

golem14•1h ago
I looked at two reports, and I can't tell if they came straight from an AI or from a very junior student who doesn't really understand security. LLMs generally sound more convincing to me.
mirekrusin•48m ago
Some (most?) are LLM chat copy-paste, addressing non-existent users in the conversation, like [0] - what a waste of time.

[0] https://hackerone.com/reports/2298307

plastic041•1h ago
In the second report, Daniel greeted the slopper very kindly and tried to start a conversation with them. But the slopper called him by a completely wrong name. And this was December 2023. It must have been extremely tiring.
johncoltrane•1h ago
> slopper

First new word of 2026. Thank you.

andrewflnr•50m ago
Slop-monger is the term I've seen, and the more evocative one I think.
ares623•48m ago
Sloperator
kakacik•24m ago
Slopster
OsrsNeedsf2P•1h ago
Honestly infuriating to read. I'm so surprised cURL put up with this for so long
worldsavior•1h ago
All of those reports are clearly AI, and it's weird seeing the staff not recognize them as AI and still engage in earnest.
ares623•1h ago
Orc, meet hobbits.
potatoproduct•42m ago
I thought the same, except I realised some of the reports were submitted back in 2023 before AI slop exploded.
dlcarrier•1h ago
An entry fee that is reimbursed if the bug turns out to matter would stop this, real quick.

Then again, I once submitted a bug report to my bank, because the login method could be switched from password+PIN to PIN only, when not logged in, and they closed it as "works as intended", because they had decided that an optional password was more convenient than a required one. (And that's not even getting into the difference between real two-factor authentication and the some-factor, one-and-a-half-times scheme they had implemented by adding a PIN to a password login.) I've since learned that anything heavily regulated like hospitals and banks will have security procedures catering to compliance, not actual security.

Assuming the host of the bug bounty program is operating in good faith, adding some kind of barrier to entry or punishment for untested entries will weed out submitters acting in bad faith.

gamer191•1h ago
Agreed, although the reimbursement should be based on whether a reasonable person could consider it a vulnerability. It's often tricky for outsiders to tell whether a behaviour is expected or a vulnerability.
fredrikholm•1h ago
> An entry fee that is reimbursed if the bug turns out to matter would stop this, real quick.

I refer to this as the Notion-to-Confluence cost border.

When Notion first came out, it was snappy and easy to use. Creating a page being essentially free of effort, you very quickly had thousands of them, mostly useless.

Confluence, at least in western EU, is offensively slow. The thought of adding a page is sufficiently demoralizing that it's easier to update an existing page and save yourself minutes of request timeouts. Consequently, there's some ~20 pages even in large companies.

I'm not saying that sleep(15 * SECOND) is the way to counter this, but once something becomes very easy to do at scale, it explodes to the point where the original utility is now lost in a sea of noise.

teekert•1h ago
It's strange how sensitive humans are to these sorts of relative perceived efforts. Having a charged, cordless vacuum cleaner ready to go and take around the house has also changed our vacuuming game, because carrying a big unwieldy vacuum cleaner and needing to find a power socket at every location just feels like much more effort, even though it really isn't.
jraph•1h ago
> Consequently, there's some ~20 pages even in large companies.

As someone working on Confluence to XWiki migration tools, I wish this was remotely true, my life would be way easier (and probably more boring :-)).

arionmiles•51m ago
I find this to be a very amusing critique. In my experience, Notion (when I stopped using it 3 years ago) was slow as molasses. Slow to load, slow to update. In comparison, at work, I almost exclusively favor Confluence Cloud. It's very responsive for me.

We have tons of Confluence wikis, updated frequently.

nospice•1h ago
> An entry fee that is reimbursed if the bug turns out to matter would stop this, real quick.

The problem is that bug bounty slop works. A lot of companies with second-tier bug bounties outsource triage to contractors (there's an entire industry built around that). If a report looks plausible, the contractor files a bug. The engineers who receive the report are often not qualified to debate exploitability, so they just make the suggested fix and move on. The reporter gets credit or a token payout. Everyone is happy.

Unless you have a top-notch security team with a lot of time on their hands, pushing back is not in your interest. If you keep getting into fights with reporters, you'll eventually get it wrong and you're gonna get derided on HN and get headlines about how you don't take security seriously.

In this model, it doesn't matter if you require a deposit, because on average, bogus reports still pay off. You also create an interesting problem that a sketchy vendor can hold the reporter's money hostage if the reporter doesn't agree to unreasonable terms.

zrm•45m ago
Triage gets outsourced because the quality of reports is low.

If filing a bad report costs money, low-quality reports go down. Meanwhile, anyone still doing it is funding your top-notch security team, because then they can thoroughly investigate the report, and if it turns out to be nothing, the reporter ends up paying them for their time.
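As a toy expected-value sketch (every number here is invented for illustration, not anything from curl's actual program or any real bounty):

    # Toy numbers, purely illustrative: how a refundable deposit changes
    # the expected payout per report for different kinds of reporters.
    DEPOSIT = 50    # forfeited if the report is bogus, refunded if valid
    BOUNTY = 1000   # paid for a valid report

    def expected_value(p_valid):
        """Reporter's expected payout per report, given their hit rate."""
        return p_valid * BOUNTY - (1 - p_valid) * DEPOSIT

    print(expected_value(0.80))   # careful researcher:  790.0
    print(expected_value(0.01))   # slop farm:           -39.5

    # Forfeited deposits from 100 slop reports: 99 * DEPOSIT = 4950,
    # which is what ends up paying the reviewers for their time.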

notpushkin•42m ago
I don’t think it works for curl though. You would guess that sloperators would figure out that their reports aren’t going through with curl specifically (because, well, people are actually looking into them and can call bullshit), and move on.

For some reason they either didn't notice (e.g. there are just too many people trying to get in on it) or did notice but decided they don't care. A deposit should help here: companies probably will not do it, so when you see a project that requires a deposit, you'll probably stop and think about it.

saghm•1h ago
That anecdote is hilarious and scary in equal measure. Optional passwords are certainly more convenient than required ones, but so are optional PINs. The most convenient UX would be never needing to log in at all! Unless you find it inconvenient for others to have access to your bank account, of course.
sersi•1h ago
I really hate the current trend of not having passwords. For example, Perplexity doesn't have a password, just an email verification to log in.
eXpl0it3r•43m ago
I hate this as well, especially since I have greylisting enabled on some email addresses, so by the time the login email is delivered, the login session has already timed out, and of course the sender uses different mail servers every time. So in some cases it's nearly impossible to log in, and it takes minutes...
6510•35m ago
Long, long ago, the Google Toolbar queries could be reverse engineered to do an "I'm Feeling Lucky" search on Gmail. I created a login that (if @gmail.com) forwarded to the specific mail.

Unlikely to happen, but it seems fun to extend email [clients] with URIs. It is just a document browser; who cares how they are delivered.

bawolff•51m ago
Bug bounties often involve a lot of risk for submitters. Often the person reading the report doesn't know that much and misinterprets it. Often the rules are unclear about what sort of reports are wanted. A pay-to-enter scheme would increase that risk.

Honestly, bug bounties are kind of miserable for both sides. I've worked on the receiving side of bug bounty programs. You wouldn't believe the shit that is submitted. This was before AI, and it was significant work to sort through; I can only imagine what it's like now. On the other hand, as a submitter you are essentially working on spec with no guarantee your work is going to be evaluated fairly. Even if it is, you are rolling the dice that your report is not a duplicate of an issue reported 10 years ago that the company just doesn't feel like fixing.

ANarrativeApe•40m ago
Pay-to-enter would increase the risk of submitting a bug report. However, if the submission fees were added to the bounty payable, then the risk/reward changes in favour of the submitter of genuine bugs. You could even refund the submission fee in the case of a good-faith non-bug submission. A little game theory can go a long way in improving the bug bounty system...
CTDOCodebases•30m ago
They could allow submitters to double down on submissions, escalating the bug to more skilled and experienced code reviewers who get a cut of the doubled submission fee for their reviews.
bawolff•28m ago
If a competent neutral party were evaluating them, I would agree. However, currently these things tend to be the luck of the draw.
eterm•40m ago
Indeed, increasing the incentive for companies to reject (and then sometimes silently fix anyway) even valid reports would only increase the misery for everyone.
dmurray•27m ago
cURL would operate such a program in good faith, and quickly earn the trust of the people who submit the kind of bug reports cURL values.

Your bank would not. Nor would mine, or most retail banks.

If the upfront cost would genuinely put off potential submitters, a cottage industry would spring up of hackers who would front you the money in return for a cut if your bug looked good. If that seems gross, it's really not - they end up doing bug triage for the project, which is something any software company would be happy to pay people for.

sudahtigabulan•22m ago
> I've since learned that anything heavily regulated like hospitals and banks will have security procedures catering to compliance, not actual security.

Sadly, yeah. And they will only do anything if they believe they can actually be caught.

An EU-wide bank I was a customer of until recently supported login with Qualified Electronic Signatures, but only if your dongle supports... SHA-1. Mine didn't. SHA-1 was deprecated at least a decade ago.

A government-certified identity provider made software that supposedly allowed you to have multiple such electronic signatures plugged in, presenting them in a list, but if one of them happened to be a YubiKey... crash. YubiKey conforms to the same standard as the PIV modules they sold, but the developers made some assumptions beyond the standard. I just wanted their software not to crash while my YubiKey is plugged in. I reported it, and they replied that it's not their problem.

plastic041•1h ago
Related: cURL stopped its HackerOne bug bounty program due to excessive slop reports https://news.ycombinator.com/item?id=46678710
novalis78•1h ago
Just use an LLM to weed them out. What’s so hard about that?
eqvinox•1h ago
At this point it's impossible to tell if this is sarcasm or not.

Brave new world we've got here.

vee-kay•1h ago
Set a thief to catch a thief.
bootsmann•1h ago
If AI can't be trusted to write bug reports, why should it be trusted to review them?
GalaxyNova•1h ago
Because LLMs are bad at reviewing code for the same reasons they are bad at making it? They get tricked by fancy clean syntax and take long descriptions / comments for granted without considering the greater context.
colechristensen•19m ago
I don't know, I prompted Opus 4.5 "Tell me the reasons why this report is stupid" on one of the example slop reports and it returned a list of pretty good answers.[1]

Give it a presumption of guilt and tell it to make a list, and an LLM can do a pretty good job of judging crap. You could very easily rig up a system that generates this "why is it stupid" report, then grades the reports and only lets humans see the ones that get better than a B+.

If you give them the right structure I've found LLMs to be much better at judging things than creating them.

Opus' judgement in the end:

"This is a textbook example of someone running a sanitizer, seeing output, and filing a report without understanding what they found."

1. https://claude.ai/share/8c96f19a-cf9b-4537-b663-b1cb771bfe3f
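
A minimal sketch of what that kind of pre-screening pass could look like, purely as an assumption of how it might be wired up (the ask_llm() helper, the grading rubric and the B+ cutoff are all hypothetical, not anything curl or HackerOne actually run):

    # Hypothetical triage pass: critique each report under a presumption of
    # guilt, have the model grade the original report, and only surface
    # reports that score strictly better than the cutoff to a human.
    GRADES = ["F", "D", "C", "B-", "B", "B+", "A-", "A"]

    def triage(report_text, ask_llm):
        """Critique the report under a presumption of guilt, then grade it."""
        critique = ask_llm(
            "Assume this security report is low-effort slop. List every reason "
            "it may be wrong, unverified, or not a real vulnerability:\n\n"
            + report_text
        )
        grade = ask_llm(
            "Given this critique, grade the original report from F to A. "
            "Answer with the grade only.\n\nCritique:\n" + critique
        ).strip()
        return grade, critique

    def needs_human(grade, cutoff="B+"):
        # Anything the model can't grade falls through to a human on purpose.
        if grade not in GRADES:
            return True
        return GRADES.index(grade) > GRADES.index(cutoff)

The obvious caveat, raised in the replies below, is whether the grader itself can be trusted.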

nprateem•5m ago
And if you ask why it's accurate, it'll spaff out another list of pretty convincing answers.
imiric•3m ago
"Tell me the reasons why this report is stupid" is a loaded prompt. The tool will generate whatever output pattern matches it, including hallucinating it. You can get wildly different output if you prompt it "Tell me the reasons why this report is great".

It's the same as if you searched the web for a specific conclusion. You will get matches for it regardless of how insane it is, leading you to believe it is correct. LLMs take this to another level, since they can generate patterns not previously found in their training data, and the output seems credible on the surface.

Trusting the output of an LLM to determine the veracity of a piece of text is a bafflingly bad idea.

f311a•1h ago
How would it work if LLMs provide incorrect reports in the first place? Have a look at the actual HackerOne reports and their comments.

The problem is the complete stupidity of people. They use LLMs to try to convince the author of curl that he is wrong about the report being hallucinated. Instead of generating ten LLM comments and doubling down on their incorrect report, they could use a bit of brain power to actually validate it. It does not even require a lot of skill; you just have to test it manually.

ChrisArchitect•1h ago
Previously: https://news.ycombinator.com/item?id=46617410

https://news.ycombinator.com/item?id=46678710

jameslk•1h ago
It seems open source loses the most from AI. Open source code trained the models, the models are being used to spam open source projects wherever there's an incentive, they can be used to chip away at open source business models by implementing paid features and providing the support, and eventually perhaps AI simply replaces most open source code.
giancarlostoro•1h ago
I wouldn't say open source code alone trained the models; surely CS courses and textbooks, official documentation, and transcripts of talks and courses all factor in as well.

On another note, regarding AI replacing most open source code: I forget what tool it was, but I needed a very niche way of accessing an old Android device (it was rooted), and if I used something like Disk Drill it would eventually crap out empty files. So I found a GUI someone made and started asking Claude to add the things I needed: a) let me preview the directories it was seeing, and b) let me sudo up and download with a reasonable delay (1s I think). That basically worked and I never had issues again; it was a little slow to recover old photos, but oh well.

I debated pushing the code changes back to GitHub; it works as expected, but it drifted from the maintainer's own goals, I'm sure.

ValveFan6969•1h ago
"open source" and "business model" in the same sentence... next you're gonna tell me to eat pudding with a fork.
jameslk•55m ago
https://en.wikipedia.org/wiki/Business_models_for_open-sourc...

I think you should try eating pudding with a fork next

bigstrat2003•50m ago
I mean... not what the other poster meant, but https://en.wikipedia.org/wiki/Sticky_toffee_pudding exists and is absolutely delicious.
jameslk•19m ago
Flan is also a type of pudding (milk/egg base) which can be eaten with a fork. Other baked custards too.
robin_reala•21m ago
You’d hardly eat black pudding with a spoon. https://en.wikipedia.org/wiki/Black_pudding
Aeglaecia•52m ago
I believe that the existence of not-for-profit organizations is a valid counterpoint to whatever your argument is.
Grollicus•49m ago
Just leaving this here: https://en.wikipedia.org/wiki/Pudding_mit_Gabel
bawolff•49m ago
> they can be used to chip away at open source business models by implementing paid features and providing the support

There are a lot of things to be sad about with AI, but this is not one of them. Nobody has a right to a business model, especially one that assumes nobody will compete with you. If your business model relies on the rest of the world being sucky so you can sell some value-added on top of open-core software, I'm happy when it fails.

sevenzero•9m ago
Competition is extremely important, yes. But not the kind of competition, backed by companies with much bigger monetary assets, that overwhelms projects built on community effort just to trample them down. The FFmpeg/Google affair is an example.
anileated•4m ago
When LLMs are based on stolen work and violate GPL terms, which should already be illegal, it's very much okay to be furious that they additionally ruin the business models of the very open source projects that made them possible in the first place.
shubhamjain•26m ago
I feel AI will have the same degrading effect on the Internet as social media did. This flood of dumb PRs and issues is one symptom of it. Another is AI accelerating the trend that TikTok started: short, shallow, low-effort content.

It's a shame, since this technology is brilliant. But every tech company has drunk the "AI is the future" Kool-Aid, which means no one has an incentive to seriously push back against the flood of low-effort, AI-generated slop. So it's going to be a race to the bottom for a while.

sevenzero•4m ago
It'll stop soonish. The industry is now financed by debt rather than monetary assets that actually exist. Tons of companies see zero gain from AI, as is reported repeatedly here on HN. So all the LLM vendors will eventually have to enshittify their products (most likely through ads, shorter token windows, higher pricing and whatnot). As of now, it's not a sustainable business model, thankfully. The only sad part is that this debt will hit the poorest people the most.
ares623•1h ago
Alternate headline: AI discovering so many exploits that cybersecurity can't keep up

Am I doing this right?

bawolff•57m ago
There is a difference between AI discovering real vulnerabilities (e.g. the ffmpeg situation), and AI being used to spam fake vulnerabilities
potatoproduct•41m ago
It's easy to discover an exploit when you're hallucinating:)
bilekas•1h ago
I just read one of the slop submissions and it's baffling how anyone could submit these with a straight face.

https://hackerone.com/reports/3293884

Not even understanding the expected behaviour and then throwing as much slop as possible to see what sticks is the problem with generative AI.

Snakes3727•1h ago
The company I work for has a pretty bad bounty system (basically a security@corp email). We have a demo system and a public API with docs. We get around 100 or more emails a day now. Most of it is slop or scams, or my new favourite: AI security companies sending us an unprompted AI-generated pentest filled with false positives, untrue things, etc. It has become completely useless, so no one looks at it.

I even had a sales rep call me up, unprompted, basically trying to book a 3-hour session to review the AI findings. When I looked at the nearly 250-page report and saw a critical IIS bug for Windows Server (doesn't exist), at a scanned IP address of 5xx.x.x.x (yes, an impossible IP), publicly available in AWS (we exclusively use GCP), I said some very choice words.

nottorp•50m ago
What I wonder is if this will actually reduce the amount of slop.

Bounties are a motivation, but there are also promotional purposes. Show that you submitted thousands of security reports to major open source software and you're suddenly a security expert.

Remember the little IoT thing that got on here because of a security report complaining, among other things, that the Linux on it did not use systemd?

bawolff•46m ago
I don't think bounties make you an "expert". If you want to be deemed an expert, write blogs detailing how the exploit works. You can do that without a bounty.

In many ways, one of the biggest benefits of bug bounties is having a dedicated place where you can submit reports, knowing the person on the other end wants them and isn't going to threaten to sue you.

For the most part, the money in a bug bounty isn't worth the effort needed to actually find stuff. The exception seems to be when you find some basic bug that you can scan half the internet for automatically and submit to 100 different bug bounties.

nottorp•41m ago
> I don't think bounties make you an "expert".

It depends on who you ask.

> If you want to be deemed an expert, write blogs detailing how the exploit works.

That's necessary if you sell your services to people likely to enjoy HN.

Springtime•16m ago
Outside of direct monetary gain like bounties, there are efforts to just stand out, in terms of being able to show contributions to a large project or getting, say, a CVE.

Stenberg has actually written on his blog a few times about invalid or wildly overrated vulnerabilities that get assigned CVEs, and those were filed by humans. I often get the sense some of these aren't just misguided reporters but deliberate attempts to make mountains out of molehills for reputation reasons. Things like this seem harder to account for as an incentive.

arjie•15m ago
It makes sense. This process of searching for bugs was slow and time-consuming so it needed to be incentivized. This is no longer the case. Now the hard part is in identifying which ones are real.

To paraphrase a famous quote: AI-equipped bug hunters find 100 out of every 3 serious vulnerabilities.

doe88•4m ago
I love the en dashes in the lead of the article; they made me doubt the article for a few seconds.

Anthropic's original take home assignment open sourced

https://github.com/anthropics/original_performance_takehome
261•myahio•5h ago•110 comments

The Agentic AI Handbook: Production-Ready Patterns

https://www.nibzard.com/agentic-handbook
49•SouravInsights•1h ago•9 comments

200 MB RAM FreeBSD Desktop

https://vermaden.wordpress.com/2026/01/18/200-mb-ram-freebsd-desktop/
34•vermaden•2d ago•14 comments

cURL removes bug bounties

https://etn.se/index.php/nyheter/72808-curl-removes-bug-bounties.html
155•jnord•2h ago•69 comments

A 26,000-year astronomical monument hidden in plain sight (2019)

https://longnow.org/ideas/the-26000-year-astronomical-monument-hidden-in-plain-sight/
457•mkmk•14h ago•90 comments

Libbbf: Bound Book Format, A high-performance container for comics and manga

https://github.com/ef1500/libbbf
47•zdw•4h ago•23 comments

Instabridge has acquired Nova Launcher

https://novalauncher.com/nova-is-here-to-stay
194•KORraN•13h ago•125 comments

Show HN: Mastra 1.0, open-source JavaScript agent framework from the Gatsby devs

https://github.com/mastra-ai/mastra
158•calcsam•16h ago•49 comments

Infracost (YC W21) Is Hiring Sr Back End Eng (Node.js+SQL) to Shift FinOps Left

https://www.ycombinator.com/companies/infracost/jobs/Sr9rmHs-senior-backend-engineer-node-js-sql
1•akh•1h ago

RSS.Social – the latest and best from small sites across the web

https://rss.social/
20•Curiositry•6h ago•4 comments

The GDB JIT Interface

https://bernsteinbear.com/blog/gdb-jit/
35•surprisetalk•4d ago•3 comments

Which AI Lies Best? A game theory classic designed by John Nash

https://so-long-sucker.vercel.app/
118•lout332•10h ago•53 comments

California is free of drought for the first time in 25 years

https://www.latimes.com/california/story/2026-01-09/california-has-no-areas-of-dryness-first-time...
373•thnaks•10h ago•181 comments

Are arrays functions?

https://futhark-lang.org/blog/2026-01-16-are-arrays-functions.html
118•todsacerdoti•2d ago•81 comments

The Unix Pipe Card Game

https://punkx.org/unix-pipe-game/
216•kykeonaut•15h ago•67 comments

IPv6 is not insecure because it lacks a NAT

https://www.johnmaguire.me/blog/ipv6-is-not-insecure-because-it-lacks-nat/
155•johnmaguire•13h ago•222 comments

Unconventional PostgreSQL Optimizations

https://hakibenita.com/postgresql-unconventional-optimizations
344•haki•18h ago•53 comments

The space and motion of communicating agents (2008) [pdf]

https://www.cl.cam.ac.uk/archive/rm135/Bigraphs-draft.pdf
37•dhorthy•5d ago•5 comments

The challenges of soft delete

https://atlas9.dev/blog/soft-delete.html
139•buchanae•11h ago•81 comments

Show HN: Agent Skills Leaderboard

https://skills.sh
89•andrewqu•11h ago•31 comments

Lunar Radio Telescope to Unlock Cosmic Mysteries

https://spectrum.ieee.org/lunar-radio-telescope
41•rbanffy•10h ago•3 comments

Our approach to age prediction

https://openai.com/index/our-approach-to-age-prediction/
100•pretext•13h ago•166 comments

Maintenance: Of Everything, Part One

https://press.stripe.com/maintenance-part-one
111•mitchbob•13h ago•19 comments

Building Robust Helm Charts

https://www.willmunn.xyz/devops/helm/kubernetes/2026/01/17/building-robust-helm-charts.html
61•will_munn•1d ago•4 comments

Ask HN: Do you have any evidence that agentic coding works?

218•terabytest•19h ago•199 comments

Proof of Concept to Test Humanoid Robots

https://thehumanoid.ai/humanoid-and-siemens-completed-a-proof-of-concept-to-test-humanoidrobots-i...
14•0xedb•5d ago•16 comments

Who owns Rudolph's nose?

https://creativelawcenter.com/copyright-rudolph-reindeer/
38•ohjeez•8h ago•15 comments

Disaster planning for regular folks (2015)

https://lcamtuf.coredump.cx/prep/index-old.shtml
109•AlphaWeaver•5h ago•65 comments

Provably unmasking malicious behavior through execution traces

https://arxiv.org/abs/2512.13821
40•PaulHoule•10h ago•5 comments

Apples, Trees, and Quasimodes

https://systemstack.dev/2025/09/humane-computing/
45•entaloneralie•3d ago•2 comments