frontpage.

Are 'toxic' personality traits useful test cases for AI or behavioral models?

https://github.com/FlDanyT/ai-celebrity-models
1•yakalmar2048•1m ago•1 comment

LiveContainer: Run iOS apps without installing them

https://github.com/LiveContainer/LiveContainer
1•handfuloflight•1m ago•0 comments

DragonSweeper: A minesweeper game that requires observation

https://dragonsweeper.org
1•wslh•2m ago•0 comments

WebRTC VPN Tunnel

https://github.com/Manav1011/webrtc-vpn
1•walterbell•4m ago•0 comments

DiffRatio – A One-Step Diffusion Model with SOTA quality and 50% less memory

https://www.arxiv.org/pdf/2502.08005
2•LoMoGan•5m ago•1 comment

The Issue with Special Issues: When Guest Editors Publish in Support of Self

https://arxiv.org/abs/2601.07563
1•wslh•6m ago•0 comments

Amazon Joins the Big-Box League with Its Largest-Ever Store

https://www.wsj.com/business/retail/amazon-orland-park-illinois-opening-13362c97
1•divbzero•10m ago•0 comments

When I Talk to AI About My Feelings, I Don't Want a Therapy Ad

https://www.theverge.com/news/864103/mixed-messaging
1•thor1122•11m ago•0 comments

Green vs. Blue

https://greenvblue.npeercy.com/
1•greenwallnorway•16m ago•0 comments

Sony to Transfer Home Entertainment Operations to TCL-Led Joint Venture

https://xthe.com/news/sony-tv-business-tcl/
1•Sandhyaseo•17m ago•1 comment

Negotiating Relationships with ChatGPT

https://arxiv.org/abs/2601.13188
2•7777777phil•18m ago•0 comments

Why Submit to AI in Production: Speaking as a Tool for Better Work

https://www.r-bloggers.com/2026/01/why-submit-to-ai-in-production-speaking-as-a-tool-for-better-w...
1•7777777phil•20m ago•0 comments

Crates.io: Development Update

https://blog.rust-lang.org/2026/01/21/crates-io-development-update/
2•quapster•21m ago•0 comments

AT&T Archives: The Unix Operating System (1972) [video]

https://www.youtube.com/watch?v=tc4ROCJYbm0
1•vismit2000•22m ago•0 comments

Agentic RAG for Dummies

https://github.com/GiovanniPasq/agentic-rag-for-dummies
1•thunderbong•23m ago•0 comments

Mnemonic BTC Slots

https://coinables.github.io/mnemonic-slots/#
1•nicholasbraker•24m ago•0 comments

Welcome to Niji V7

https://nijijourney.com/blog/niji-7
1•ankitg12•26m ago•0 comments

Show HN: A Spectrum Album – Structuring AI-Generated Music with Suno

https://karbeyazalbum.replit.app/
2•ersinesen•26m ago•0 comments

Accidentally making $1000 for finding Security Bugs as a Backend Developer

https://not-afraid.medium.com/accidentally-making-1000-for-finding-security-bugs-as-a-backend-dev...
1•birdculture•27m ago•0 comments

Show HN: BSS Blue Hive Guide

https://www.bluehiveguide.com/index.html
1•andy846851797•27m ago•0 comments

Git Show

https://tonystr.net/blog/git
1•TonyStr•29m ago•0 comments

Show HN: LLM fine-tuning without infra or ML expertise

https://www.tinytune.xyz/
2•Jacques2Marais•29m ago•1 comment

Ask HN: How do you manage your morning catch-up routine?

2•Peterz_shu•32m ago•1 comment

Show HN: I built an enterprise weather intelligence platform with Lovable

https://preview--chrono-strata.lovable.app/shop
2•lavandar-admin•35m ago•0 comments

From 75% to 99.6%: The Math of LLM Ensembles

https://www.shibaprasadb.com/2026/01/20/llm-ensemble.html
2•bluebirdfirewin•35m ago•0 comments

Remote Authentication By-Pass in Telnetd (2026)

https://seclists.org/oss-sec/2026/q1/89
3•faebi•37m ago•0 comments

Community-driven peer review: platform to review, rate, access research freely

https://test.scicommons.org/
3•DrSAR•37m ago•0 comments

Show HN: Encrypter.site – Browser-based E2E encryption for text and files

https://www.encrypter.site/
2•zealer•37m ago•0 comments

The UK government is backing AI that can run its own lab experiments

https://www.technologyreview.com/2026/01/20/1131462/the-uk-government-is-backing-ai-scientists-th...
3•Brajeshwar•38m ago•0 comments

Can you slim macOS down?

https://eclecticlight.co/2026/01/21/can-you-slim-macos-down/
3•ingve•38m ago•0 comments

cURL removes bug bounties

https://etn.se/index.php/nyheter/72808-curl-removes-bug-bounties.html
150•jnord•2h ago

Comments

eknkc•1h ago
A list of the slop if anyone is interested:

https://gist.github.com/bagder/07f7581f6e3d78ef37dfbfc81fd1d...

shusaku•1h ago
> To replicate the issue, I have searched in the Bard about this vulnerability.

Seeing Bard mentioned as an LLM takes me back :)

golem14•1h ago
I looked at two reports, and I can't tell if the reports are directly from an AI or from some very junior student who doesn't really understand security. LLMs generally sound more convincing to me.
mirekrusin•34m ago
Some (most?) are LLM chat copy-paste, addressing non-existent users in conversations like [0] - what a waste of time.

[0] https://hackerone.com/reports/2298307

plastic041•1h ago
In the second report, Daniel greeted the slopper very kindly and tried to start a conversation with them. But the slopper called him by a completely wrong name. And this was December 2023. It must have been extremely tiring.
johncoltrane•52m ago
> slopper

First new word of 2026. Thank you.

andrewflnr•36m ago
Slop-monger is the term I've seen, and the more evocative one I think.
ares623•34m ago
Sloperator
kakacik•10m ago
Slopster
OsrsNeedsf2P•1h ago
Honestly infuriating to read. I'm so surprised cURL put up with this for so long
worldsavior•57m ago
All of those reports are clearly AI, and it's weird seeing the staff not recognize them as AI and treat them seriously.
ares623•50m ago
Orc, meet hobbits.
potatoproduct•28m ago
I thought the same, except I realised some of the reports were submitted back in 2023 before AI slop exploded.
dlcarrier•1h ago
An entry fee that is reimbursed if the bug turns out to matter would stop this, real quick.

Then again, I once submitted a bug report to my bank, because the login method could be switched from password+PIN to PIN only while not logged in, and they closed it as "works as intended", because they had decided that an optional password was more convenient than a required password. (And that's not even getting into the difference between real two-factor authentication and the one-and-a-half-factor scheme they had implemented by adding a PIN to a password login.) I've since learned that anything heavily regulated, like hospitals and banks, will have security procedures catering to compliance, not actual security.

Assuming the host of the bug bounty program is operating in good faith, adding some kind of barrier to entry or punishment for untested entries will weed out submitters acting in bad faith.
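A back-of-the-envelope sketch of why a forfeitable entry fee changes the economics. All numbers here are made up for illustration; real payouts, validity rates, and fee sizes would differ:

```python
# Expected value per submitted report under a refundable-deposit scheme:
# valid reports pay out (and get the deposit back), invalid ones forfeit it.
# All figures below are illustrative, not real bounty numbers.

def expected_value(p_valid: float, payout: float, deposit: float) -> float:
    """EV of one report: win the payout with probability p_valid,
    lose the deposit otherwise."""
    return p_valid * payout - (1 - p_valid) * deposit

# A slop farm whose reports hold up ~1% of the time, chasing a $500 payout:
no_fee = expected_value(p_valid=0.01, payout=500, deposit=0)     # ≈ +$5, so spam pays
with_fee = expected_value(p_valid=0.01, payout=500, deposit=20)  # ≈ -$15, spam now loses money

# A careful researcher whose reports hold up ~60% of the time barely notices the fee:
researcher = expected_value(p_valid=0.60, payout=500, deposit=20)  # ≈ +$292

print(no_fee, with_fee, researcher)
```

Even a small deposit flips the sign of the spammer's expected value while leaving a genuine researcher's essentially unchanged, which is the game-theoretic point being made.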

gamer191•1h ago
Agreed, although the reimbursement should be based on whether a reasonable person could consider that to be a vulnerability. Often it’s tricky for outsiders to tell whether a behaviour is expected or a vulnerability
fredrikholm•1h ago
> An entry fee that is reimbursed if the bug turns out to matter would stop this, real quick.

I refer to this as the Notion-to-Confluence cost border.

When Notion first came out, it was snappy and easy to use. Since creating a page was essentially free of effort, you very quickly had thousands of them, mostly useless.

Confluence, at least in western EU, is offensively slow. The thought of adding a page is sufficiently demoralizing that it's easier to update an existing page and save yourself minutes of request timeouts. Consequently, there are some ~20 pages even in large companies.

I'm not saying that sleep(15 * SECOND) is the way to counter it, but once something becomes very easy to do at scale, it explodes to the point where the original utility is lost in a sea of noise.

teekert•1h ago
It's strange how sensitive humans are to this sort of relative perceived effort. Having a charged, cordless vacuum cleaner ready to grab and take around the house has also changed our vacuuming game, because carrying a big, unwieldy vacuum cleaner and needing to find a power socket at every location just feels like much more effort, even though it really isn't.
jraph•49m ago
> Consequently, there's some ~20 pages even in large companies.

As someone working on Confluence to XWiki migration tools, I wish this was remotely true, my life would be way easier (and probably more boring :-)).

arionmiles•37m ago
I find this to be a very amusing critique. In my experience, Notion (when I stopped using it 3 years ago) was slow as molasses. Slow to load, slow to update. In comparison, at work, I almost exclusively favor Confluence Cloud. It's very responsive for me.

We have tons of Confluence wikis, updated frequently.

nospice•1h ago
> An entry fee that is reimbursed if the bug turns out to matter would stop this, real quick.

The problem is that bug bounty slop works. A lot of companies with second-tier bug bounties outsource triage to contractors (there's an entire industry built around that). If a report looks plausible, the contractor files a bug. The engineers who receive the report are often not qualified to debate exploitability, so they just make the suggested fix and move on. The reporter gets credit or a token payout. Everyone is happy.

Unless you have a top-notch security team with a lot of time on their hands, pushing back is not in your interest. If you keep getting into fights with reporters, you'll eventually get it wrong and you're gonna get derided on HN and get headlines about how you don't take security seriously.

In this model, it doesn't matter if you require a deposit, because on average, bogus reports still pay off. You also create an interesting problem that a sketchy vendor can hold the reporter's money hostage if the reporter doesn't agree to unreasonable terms.

zrm•31m ago
Triage gets outsourced because the quality of reports is low.

If filing a bad report costs money, low-quality reports go down. Meanwhile, anyone still doing it is funding your top-notch security team: they can thoroughly investigate the report, and if it turns out to be nothing, the reporter ends up paying them for their time.

notpushkin•28m ago
I don’t think it works for curl though. You would guess that sloperators would figure out that their reports aren’t going through with curl specifically (because, well, people are actually looking into them and can call bullshit), and move on.

For some reason they either didn't notice (e.g. there are just too many people trying to get in on it), or did notice but decided they don't care. A deposit should help here: companies probably won't do one, so when you see a project that requires a deposit, you'll probably stop and think about it.

saghm•1h ago
That anecdote is hilarious and scary in equal measure. Optional passwords are certainly more convenient than required ones, but so are optional PINs. The most convenient UX would be never needing to log in at all! Unless you find it inconvenient for others to have access to your bank account, of course.
sersi•52m ago
I really hate the current trend of not having passwords. For example, Perplexity doesn't have a password, just an email verification to log in.
eXpl0it3r•29m ago
I hate this as well, especially since I have greylisting enabled on some email addresses, so by the time the login email is delivered, the login session has already timed out, and of course the sender uses different mail servers every time. So in some cases it's nearly impossible to log in, and it takes minutes...
6510•21m ago
Long, long ago, the Google Toolbar queries could be reverse-engineered to do an "I'm Feeling Lucky" search on Gmail. I created a login that (if @gmail.com) forwarded to the specific mail.

Unlikely to happen, but it seems fun to extend email [clients] with URIs. It is just a document browser; who cares how they are delivered.

bawolff•37m ago
Bug bounties often involve a lot of risk for submitters. Often the person reading the report doesn't know that much and misinterprets it. Often the rules are unclear about what sort of reports are wanted. Pay-to-enter would increase that risk.

Honestly, bug bounties are kind of miserable for both sides. I've worked on the receiving side of bug bounty programs. You wouldn't believe the shit that is submitted. This was before AI, and it was significant work to sort through; I can only imagine what it's like now. On the other hand, as a submitter you are essentially working on spec with no guarantee your work is going to be evaluated fairly. Even if it is, you are rolling the dice that your report is not a duplicate of an issue reported 10 years ago that the company just doesn't feel like fixing.

ANarrativeApe•26m ago
Pay-to-enter would increase the risk of submitting a bug report. However, if the submission fees were added to the bounty payable, then the risk/reward changes in favour of the submitter of genuine bugs. You could even refund the submission fee in the case of a good-faith non-bug submission. A little game theory can go a long way in improving the bug bounty system...
CTDOCodebases•16m ago
They could allow submitters to double down on submissions, escalating the bug to more skilled and experienced code reviewers who get a cut of the doubled submission fee for their reviews.
bawolff•14m ago
If a competent neutral party were evaluating them, I would agree. However, currently these things tend to be the luck of the draw.
eterm•26m ago
Indeed, increasing the incentive for companies to reject (and then sometimes silently fix anyway) even valid reports would only mean further misery for everyone.
dmurray•13m ago
cURL would operate such a program in good faith, and quickly earn the trust of the people who submit the kind of bug reports cURL values.

Your bank would not. Nor would mine, or most retail banks.

If the upfront cost would genuinely put off potential submitters, a cottage industry would spring up of hackers who would front you the money in return for a cut if your bug looked good. If that seems gross, it's really not - they end up doing bug triage for the project, which is something any software company would be happy to pay people for.

sudahtigabulan•8m ago
> I've since learned that anything heavily regulated like hospitals and banks will have security procedures catering to compliance, not actual security.

Sadly, yeah. And they will only do anything if they believe they can actually be caught.

An EU-wide bank I was a customer of until recently supported login with Qualified Electronic Signatures, but only if your dongle supported... SHA-1. Mine didn't. It was deprecated at least a decade ago.

A government-certified identity provider made software that supposedly allowed you to have multiple such electronic signatures plugged in, presenting them in a list, but if one of them happened to be a YubiKey... crash. YubiKey conforms to the same standard as the PIV modules they sold, but the developers made some assumptions beyond the standard. I reported it, and they replied that it's not their problem.

plastic041•1h ago
related: cURL stopped HackerOne bug bounty program due to excessive slop reports https://news.ycombinator.com/item?id=46678710
novalis78•1h ago
Just use an LLM to weed them out. What’s so hard about that?
eqvinox•1h ago
At this point it's impossible to tell if this is sarcasm or not.

Brave new world we got there.

vee-kay•1h ago
Set a thief to catch a thief.
bootsmann•1h ago
If AI can't be trusted to write bug reports, why should it be trusted to review them?
GalaxyNova•1h ago
Because LLMs are bad at reviewing code for the same reasons they are bad at making it? They get tricked by fancy clean syntax and take long descriptions / comments for granted without considering the greater context.
colechristensen•5m ago
I don't know, I prompted Opus 4.5 "Tell me the reasons why this report is stupid" on one of the example slop reports and it returned a list of pretty good answers.[1]

Give it a presumption of guilt and tell it to make a list, and an LLM can do a pretty good job of judging crap. You could very easily rig up a system that generates this "why is it stupid" report, grades the reports, and only lets humans see the ones that score better than a B+.

If you give them the right structure I've found LLMs to be much better at judging things than creating them.

Opus' judgement in the end:

"This is a textbook example of someone running a sanitizer, seeing output, and filing a report without understanding what they found."

1. https://claude.ai/share/8c96f19a-cf9b-4537-b663-b1cb771bfe3f
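The gating described above could be sketched roughly like this. This is a hypothetical sketch: the prompt wording, the grade scale, and the B+ cutoff are taken from the comment, and the actual model call is deliberately left out since it depends on whichever LLM API you use:

```python
# Sketch of a "presumption of guilt" triage gate for bug-bounty reports.
# The model call itself is omitted; this shows only the prompt, the grade
# parsing, and the human-visibility cutoff.

GRADES = {"A": 4.0, "A-": 3.7, "B+": 3.3, "B": 3.0, "C": 2.0, "D": 1.0, "F": 0.0}

def build_prompt(report: str) -> str:
    # Ask for flaws first, then a bare letter grade on the final line.
    return (
        "Tell me the reasons why this bug report is stupid. Then, on the "
        "final line, output only a letter grade (A to F) for its credibility.\n\n"
        + report
    )

def parse_grade(model_output: str) -> float:
    # The grade is expected alone on the last line; anything
    # unparseable fails closed (treated as an F).
    last_line = model_output.strip().splitlines()[-1].strip()
    return GRADES.get(last_line, 0.0)

def needs_human(model_output: str) -> bool:
    # Only reports graded strictly better than B+ reach a human.
    return parse_grade(model_output) > GRADES["B+"]
```

Failing closed on unparseable output matters here: a report the grader cannot score cleanly should land in the slop pile, not on a maintainer's desk.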

f311a•1h ago
How would this work if LLMs produce incorrect reports in the first place? Have a look at the actual HackerOne reports and their comments.

The problem is the complete stupidity of people. They use LLMs to try to convince the author of curl that he is wrong to call the report hallucinated. Instead of generating ten LLM comments and doubling down on their incorrect report, they could use a bit of brain power to actually validate the report. It doesn't even require a lot of skill; you just have to test it manually.

ChrisArchitect•1h ago
Previously: https://news.ycombinator.com/item?id=46617410

https://news.ycombinator.com/item?id=46678710

jameslk•1h ago
It seems open source loses the most from AI. Open source code trained the models, the models are being used to spam open source projects anywhere there's an incentive, they can be used to chip away at open source business models by implementing paid features and providing the support, and eventually perhaps AI simply replaces most open source code.
giancarlostoro•1h ago
I wouldn't say open source code alone trained the models; surely CS courses and textbooks, official documentation, and transcripts of talks and courses all factor in as well.

On another note, regarding AI replacing most open source code: I forget what tool it was, but I needed a very niche way of accessing an old Android device (it was rooted), and if I used something like Disk Drill it would eventually crap out empty files. So I found a GUI someone made and started asking Claude to add things I needed: a) let me preview the directories it was seeing, b) let me sudo up, and c) let me download with a reasonable delay (1s, I think). That basically worked, and I never had issues again. It was a little slow to recover old photos, but oh well.

I debated pushing the code changes back to GitHub; it works as expected, but I'm sure it drifted from the maintainer's own goals.

ValveFan6969•50m ago
"open source" and "business model" in the same sentence... next you're gonna tell me to eat pudding with a fork.
jameslk•41m ago
https://en.wikipedia.org/wiki/Business_models_for_open-sourc...

I think you should try eating pudding with a fork next

bigstrat2003•36m ago
I mean... not what the other poster meant, but https://en.wikipedia.org/wiki/Sticky_toffee_pudding exists and is absolutely delicious.
jameslk•5m ago
Flan is also a type of pudding (milk/egg base) which can be eaten with a fork. Other baked custards too.
robin_reala•7m ago
You’d hardly eat black pudding with a spoon. https://en.wikipedia.org/wiki/Black_pudding
Aeglaecia•38m ago
I believe that the existence of not-for-profit organizations is a valid counterpoint to whatever your argument is
Grollicus•35m ago
Just leaving this here: https://en.wikipedia.org/wiki/Pudding_mit_Gabel
bawolff•34m ago
> they can be used to chip away at open source business models by implementing paid features and providing the support

There are a lot of things to be sad about with AI, but this is not one of them. Nobody has a right to a business model, especially one that assumes nobody will compete with you. If your business model relies on the rest of the world being sucky so you can sell some value-add to open-core software, I'm happy when it fails.

shubhamjain•12m ago
I feel AI will have the same degrading effect on the Internet as social media did. This flood of dumb PRs and issues is one symptom of it. Another is AI accelerating the trend that TikTok started: short, shallow, low-effort content.

It's a shame, since the technology is brilliant. But every tech company has drunk the "AI is the future" Kool-Aid, which means no one has an incentive to seriously push back against the flood of low-effort, AI-generated slop. So it's going to be a race to the bottom for a while.

ares623•51m ago
Alternate headline: AI discovering so many exploits that cybersecurity can't keep up

Am I doing this right?

bawolff•43m ago
There is a difference between AI discovering real vulnerabilities (e.g. the ffmpeg situation), and AI being used to spam fake vulnerabilities
potatoproduct•26m ago
It's easy to discover an exploit when you're hallucinating:)
bilekas•48m ago
I just read one of the slop submissions and it's baffling how anyone could submit these with a straight face.

https://hackerone.com/reports/3293884

Not even understanding the expected behaviour and then throwing as much slop as possible to see what sticks is the problem with generative AI.

Snakes3727•46m ago
The company I work for has a pretty bad bounty system (basically a security@corp email). We have a demo system and a public API with docs. We now get around 100 or more emails a day. Most of it is slop, scams, or my new favourite: AI security companies sending us an unprompted, AI-generated pentest filled with false positives, untrue things, etc. It has become completely useless, so no one looks at it.

I even had a sales rep call me up, basically trying to book a 3-hour session to review the AI findings, unprompted. When I looked at the nearly 250-page report and saw a critical IIS bug for Windows Server (doesn't exist) at a scanned IP address of 5xx.x.x.x (yes, an impossible IP) publicly available in AWS (we exclusively use GCP), I said some very choice words.

nottorp•36m ago
What I wonder is if this will actually reduce the amount of slop.

Bounties are a motivation, but there's also promotional purposes. Show that you submitted thousands of security reports to major open source software and you're suddenly a security expert.

Remember the little IoT thing that got on here because of a security report complaining, among other things, that the Linux on it did not use systemd?

bawolff•31m ago
I don't think bounties make you an "expert". If you want to be deemed an expert, write blog posts detailing how the exploit works. You can do that without a bounty.

In many ways, one of the biggest benefits of bug bounties is having a dedicated place where you can submit reports and know the person on the other end wants them and isn't going to threaten to sue you.

For the most part, the money in a bug bounty isn't worth the effort needed to actually find stuff. The exception seems to be when you find some basic bug that you can automatically scan half the internet for and submit to 100 different bug bounties.

nottorp•27m ago
> I don't think bounties make you an "expert".

It depends on who you ask.

> If you want to be deemed an expert, write blogs detailing how the exploit works.

That's necessary if you sell your services to people likely to enjoy HN.

Springtime•2m ago
Outside of direct monetary gain like bounties, there are efforts to just stand out: being able to show contributions to a large project, or getting, say, a CVE.

Stenberg has actually written on his blog a few times about invalid or wildly overrated vulnerabilities that get assigned CVEs, and those were made by humans. I often get the sense some of these aren't just misguided reporters but deliberate attempts to make mountains out of molehills for reputation reasons. Things like this seem harder to account for as an incentive.

arjie•1m ago
It makes sense. This process of searching for bugs was slow and time-consuming, so it needed to be incentivized. That is no longer the case. Now the hard part is identifying which ones are real.

To paraphrase a famous quote: AI-equipped bug hunters find 100 out of every 3 serious vulnerabilities.