frontpage.

The Wonders of AI: We Are Retiring Our Bug Bounty Program

https://turso.tech/blog/the-wonders-of-ai
145•tjek•1h ago•79 comments

A 0-click exploit chain for the Pixel 10

https://projectzero.google/2026/05/pixel-10-exploit.html
66•happyhardcore•1h ago•17 comments

O(x)Caml in Space

https://gazagnaire.org/blog/2026-05-14-borealis.html
150•yminsky•4h ago•22 comments

Power Tools Got Worse on Purpose. Who Owns DeWalt, Craftsman, and Milwaukee?

https://www.worseonpurpose.com/p/your-power-tools-got-worse-on-purpose
94•prawn•2h ago•41 comments

Trade Dollars with other startups. Book it as revenue

https://www.revswap.ai/
78•tormeh•1h ago•33 comments

ASCII by Jason Scott

https://ascii.textfiles.com/
22•bookofjoe•53m ago•2 comments

Explore Wikipedia Like a Windows XP Desktop

https://explorer.samismith.com/
299•smusamashah•6h ago•73 comments

High dimensional geometry is transforming the MRI industry (2017) [pdf]

https://www.ams.org/government/DonohoPresentation06-28-17Final.pdf
24•nill0•1h ago•1 comment

Show HN: Find the best local LLM for your hardware, ranked by benchmarks

https://github.com/Andyyyy64/whichllm
254•andyyyy64•5h ago•47 comments

Removing the modem and GPS from my 2024 RAV4 hybrid

https://arkadiyt.com/2026/05/13/removing-the-modem-and-gps-from-my-rav4/
968•arkadiyt•21h ago•505 comments

Radicle: Sovereign code forge built on Git

https://radicle.dev/
82•KolmogorovComp•2h ago•17 comments

SigNoz (YC W21, open source Datadog) Is hiring for growth and engineering roles

https://signoz.io/careers
1•pranay01•2h ago

UK government replaces Palantir software with internally-built refugee system

https://www.bbc.com/news/articles/c2l2j1lxdk5o
400•cdrnsf•16h ago•148 comments

Too dangerous or just too expensive? The real reason Anthropic is hiding Mythos

https://kingy.ai/ai/too-dangerous-to-release-or-just-too-expensive-the-real-reason-anthropic-is-h...
109•chbint•2h ago•107 comments

Amazon workers under pressure to up their AI usage, so they're making up tasks

https://www.fastcompany.com/91541586/amazon-workers-pressured-to-up-ai-use-extraneous-tasks
49•hackernj•1h ago•31 comments

A few words on DS4

https://antirez.com/news/165
376•caust1c•16h ago•155 comments

The old world of tech is dying and the new cannot be born

https://www.baldurbjarnason.com/2026/the-old-world-of-tech-is-dying/
91•speckx•2h ago•50 comments

Details of the Daring Airdrop at Tristan Da Cunha

https://www.tristandc.com/government/news-2026-05-11-airdrop.php
203•kspacewalk2•10h ago•79 comments

Building ML framework with Rust and Category Theory

https://hghalebi.github.io/category_theory_transformer_rs/
68•adamnemecek•22h ago•15 comments

Welcome to the Strip Mining Era of OSS Security

https://www.metabase.com/blog/strip-mining-era-of-open-source-security
64•salsakran•3h ago•46 comments

RTX 5090 and M4 MacBook Air: Can It Game?

https://scottjg.com/posts/2026-05-05-egpu-mac-gaming/
647•allenleee•23h ago•151 comments

Check Your Fucking Sources, People

https://brodzinski.com/2026/05/check-fcking-sources.html
18•flail•48m ago•7 comments

NanoTDB – Golang Append-Only Time Series DB

https://github.com/aymanhs/nanotdb
15•aymanhs72•4h ago•3 comments

First public macOS kernel memory corruption exploit on Apple M5

https://blog.calif.io/p/first-public-kernel-memory-corruption
400•quadrige•20h ago•106 comments

Codex is now in the ChatGPT mobile app

https://openai.com/index/work-with-codex-from-anywhere/
398•mikeevans•18h ago•200 comments

Gyroflow: Video stabilization using gyroscope data

https://github.com/gyroflow/gyroflow
129•nateb2022•3d ago•21 comments

New Nginx Exploit

https://github.com/DepthFirstDisclosures/Nginx-Rift
408•hetsaraiya•21h ago•96 comments

Steve Jobs' Next Computer: His Forgotten Exile Years

https://spectrum.ieee.org/steve-jobs-next-computer
76•rbanffy•4h ago•74 comments

Mullvad exit IPs are surprisingly identifying

https://tmctmt.com/posts/mullvad-exit-ips-as-a-fingerprinting-vector/
490•RGBCube•12h ago•299 comments

Claude for Legal

https://github.com/anthropics/claude-for-legal
148•Einenlum•17h ago•125 comments

The Wonders of AI: We Are Retiring Our Bug Bounty Program

https://turso.tech/blog/the-wonders-of-ai
134•tjek•1h ago

Comments

k2xl•1h ago
Isn't there some alternative approach? I.e., when someone submits AI slop they get a strike. Three strikes and you're suspended from submitting to the bug bounty for X months/years?

Edit: I get it. It seems like authentication is the challenge.
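k2xl's strike system could be sketched in a few lines (the names and limit here are illustrative, not anything Turso described); as the replies below note, it does nothing against fresh Sybil accounts:

```python
from collections import defaultdict

STRIKE_LIMIT = 3  # illustrative: "three strikes and you are suspended"

strikes = defaultdict(int)
suspended = set()

def record_slop(user: str) -> None:
    """Each confirmed slop submission adds a strike; at the limit,
    the account is suspended from the bounty program."""
    strikes[user] += 1
    if strikes[user] >= STRIKE_LIMIT:
        suspended.add(user)

for _ in range(3):
    record_slop("mallory")
assert "mallory" in suspended
assert "alice" not in suspended
```

The catch, as vrighter points out, is that assigning a strike still requires a human to review the submission first, so the maintainer time is already spent.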

moron4hire•1h ago
They mentioned they had identified alternatives, but it would be costly to implement them. One can imagine that evading a ban by generating a new user account would be easy for an LLM agent. It's going to be a long, long game of whack-a-mole.
vrighter•1h ago
You still need to spend effort reviewing the code to figure out when you can give a strike (three strikes for an actual ban). This would still waste precious maintainer time.
JoshTriplett•1h ago
https://en.wikipedia.org/wiki/Sybil_attack

New identities are cheap.

icoder•53m ago
I think that's the problem, or at least a problem, and a growing one.
blharr•57m ago
Such a person can just make a new account and go back at it
empath75•57m ago
This probably gets solved outside of the level of an individual project. No small team can handle this without building a whole product just to handle the bug bounty.
mapt•53m ago
How about "it costs $1000 to submit a bug report for approval", and raise the reward to $2000 (or $5000 if it's in the cards, since that will have a deterrent impact on non-AI responses).

Denominated in BTC to avoid chargebacks etc.

JoshTriplett•28m ago
I think that's entirely sensible. Doesn't even have to be that expensive, just expensive enough to deter people who go "oooh, free money", and expensive enough to compensate for having to review slop far enough to realize it's slop.
icoder•22m ago
Wouldn't be surprised if a dollar per entry already made a whole lot of difference.
ToucanLoucan•58m ago
Oh look it's more of exactly what AI skeptics said would happen: low effort bullshit generated at scale making life hell for people actually trying to make things. That's wild.

Edit: it is genuinely wild. I don't know of another product category that selects so perfectly for the WORST type of person to be its enthusiast. Just every single person I see hyped about AI is fucking insufferable on at least one, and usually multiple, axes.

jquery•46m ago
I think people would be more interested in listening to "AI skeptics" if they offered realistic solutions to the problems they predict. Pandora's box has been opened, let's deal with the consequences now instead of trying to shut the box which cannot be shut.
ToucanLoucan•42m ago
> I think people would be more interested in listening to "AI skeptics" if they offered realistic solutions to the problems they predict.

AI is the fucking problem. Yes, it has (some) uses. It is not nearly the number advertised. And more and more the median use case seems to be, again, overloading people actually trying to do work with an avalanche of bullshit.

The solution is exactly what the linked article says: shut it down. The AI people have ruined another good thing that was both beneficial to the project, and to a number of individuals.

dale_glass•31m ago
> The solution is exactly what the linked article says: shut it down.

At this point it's impossible, so I concur with the parent: forget about the shutting it down and think of something actually realistic.

jcgrillo•21m ago
> forget about the shutting it down and think of something actually realistic.

Why is it not realistic? Small teams do excellent work. Keep your team small and trusted. Only accept contributions from your team, and people outside your team who are personally vouched for by someone on your team. It's like climbing mountains or sailing or any other type of inherently risky activity--you don't go out with people you don't trust. It's eminently possible, you just don't like the idea of it.

dale_glass•17m ago
That's not shutting anything down, that's just being selective with what you accept, and everyone did that already to some extent.

Even pre-AI it was obvious that contributions have to be vetted for a bunch of reasons.

jcgrillo•5m ago
Right, so the Github "open contributions" model where anyone can open an issue or a PR or otherwise waste a maintainer's time is broken. Fundamentally insecure under this type of attack. Now that the exploit is being used widely, and costing us immensely, we need to put a lid on it. If the only way to guarantee an AI bot (or its meatspace sock puppet) doesn't waste your time is to move to a "look but don't touch" model, then that's what we need to do. I think this would be a reasonable default:

Public repos are read only except for contributors who have been given specific permission, and those permissions are granular e.g. in order of increasing damage potential:

- comment on issue

- create issue

- comment on PR

- create PR

- run CI against PR

- etc.

In other words, shut it down.
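jcgrillo's read-only default could be sketched as a simple ordered permission model (the level names and ordering below are illustrative, not any real forge's API):

```python
from enum import IntEnum

class Permission(IntEnum):
    """Granular repo permissions, ordered by increasing damage potential,
    per the list above. Hypothetical levels for illustration only."""
    READ = 0
    COMMENT_ISSUE = 1
    CREATE_ISSUE = 2
    COMMENT_PR = 3
    CREATE_PR = 4
    RUN_CI = 5

def allowed(granted: Permission, action: Permission) -> bool:
    # A contributor may perform any action at or below their granted level;
    # unvouched accounts default to READ, i.e. "look but don't touch".
    return granted >= action

assert allowed(Permission.READ, Permission.CREATE_PR) is False
assert allowed(Permission.RUN_CI, Permission.COMMENT_ISSUE) is True
```

The design choice is that permissions form a total order, so "look but don't touch" is just the zero level rather than a separate mode.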

miyoji•10m ago
This response is incredibly annoying and insufferable. It's only "impossible" at this point because people continually ignored skeptics and anyone warning about exactly these outcomes.

Now that doom is here, it's too late to do anything about it. Just accept the doom!

vovavili•30m ago
What an unreasonably maximalist opinion.
xandrius•41m ago
To be fair, you're not making a compelling case for your team either.
jcgrillo•30m ago
Web3 is the closest analogue in recent memory, but if you go back further to the pre-enlightenment era (and some pockets of more recent history, particularly in isolated rural/colonial regions) you can see similar behaviors. It's mad religious fervor coupled with poor education. They see what their beliefs tell them they should see, and lack the mental rigor to analyze the actual data. Not their fault! It's our fault for letting them into the profession. Other disciplines are much better at keeping these folks outside the gates.
micromacrofoot•15m ago
for every person that's hyping AI there are another 10 just using it to get stuff done without talking about it incessantly
miyoji•8m ago
Yeah, those 10 people are getting really useful stuff done, like submitting nonsense bugs to Turso in an attempt to get bug bounties.
wg0•58m ago
Which goes to show that the bottleneck isn't in writing the code. It is in reading and understanding the code.

We all had that one "productive" engineer on our team who would write huge PRs full of large swaths of refactoring, warranted or not, and that was long before anyone could imagine in their wildest dreams that neural networks would generate code in such quantities.

The net effect of such a "productive" engineer was never to increase team velocity. Instead, the team would slow to a crawl: either his PRs had to be reviewed in detail, eating up all the time, or a cursory LGTM let them blow up in production, forcing everyone back to the drawing board. Meanwhile, the project architecture shifted so rapidly under his "productivity" that no one except that one "super smart, talented, productive, loyal to the company goals" guy had a clear picture of the codebase, of what was where.

vrganj•52m ago
That guy is now running twenty agents in parallel and really scaling up his wonderful impact.
caminante•42m ago
Maybe "Hurricane Hacker" who produces "tactical tornadoes" via agents?
chapinb•48m ago
Sounds like a tactical tornado; it made me think of this paragraph:

“Almost every software development organization has at least one developer who takes tactical programming to the extreme: a tactical tornado. The tactical tornado is a prolific programmer who pumps out code far faster than others but works in a totally tactical fashion. When it comes to implementing a quick feature, nobody gets it done faster than the tactical tornado. In some organizations, management treats tactical tornadoes as heroes. However, tactical tornadoes leave behind a wake of destruction. They are rarely considered heroes by the engineers who must work with their code in the future. Typically, other engineers must clean up the messes left behind by the tactical tornado, which makes it appear that those engineers (who are the real heroes) are making slower progress than the tactical tornado.” - John Ousterhout, A Philosophy of Software Design

abirch•41m ago
AI can be the ultimate tactical tornado.
Neywiny•43m ago
I was (almost) just that guy for one PR. Removed something like 20% or more of the codebase by leveraging the libraries and external tools we already had in use better, but it meant almost every single thing we were doing had to use the library function instead of the one we wrote. But assuming you have good regression tests and linters, so you know the code works and it's not terrible, the review should be more about overall high level quality instead of poring over every character to check correctness. It was still a pain to review, though
jt2190•31m ago
You’re not an example of what we’re talking about here. Congratulations!

A better example would be if you’d changed the behavior of the library as you did this work, and the library changes introduced hard-to-detect bugs across the application.

triceratops•26m ago
Admirable effort. But why did you have to do it in one PR?
stronglikedan•21m ago
> almost every single thing we were doing had to use the library function instead of the one we wrote
Neywiny•14m ago
As per the other person's comment, yeah basically I could have broken it up but it would've been an arbitrary demarcation. I just deleted our functions and fixed everything that yelled. Admittedly that could've been one and then leveraging the libraries better could've been another, but they would've been 2 PRs that changed almost every line. So done as one to mitigate review time.
limaoscarjuliet•39m ago
"[...] bottleneck isn't in writing the code. It is in reading and understanding the code". 100% agreed! Furthermore, the more code is generated by AI, the fewer people will actually understand it!
CharlesW•8m ago
Generally, software engineers already have little to no understanding of the code that's actually being executed. We're so used to high- and higher-level abstractions like C, Go, Python, and JavaScript that we forget that we're already working with mostly deterministic symbols in a process that more closely resembles invoking magic spells than writing machine code. One more level of abstraction is not the end of software engineering.
nixon_why69•38m ago
Context is everything for massive PRs.

If you don't ever have a massive PR from a dynamite session, then you cannot ever be better than "average and plodding". So the question is, what's the context of the massive PR and how should it be handled?

* Mature product making money, intermediate engineer just refactored everything so it's "better"? Shut the fuck up, kindly please, you will have to demonstrate that you understand why things are this way and why it's better before we even have this conversation.

* Greenfield dev, trusted engineer getting from 0 -> 1 on something big? Maybe it shouldn't be held up in committee for 2 weeks. Maybe most objections will be superficial stylistic concerns.

Obviously there are many other contexts and these are 2 extremes in a multi-dimensional space. But if the process is "we litigate every line", then that's just not an innovative place to be. Yes, most PRs should be small, targeted, easy to review and tied to a ticket but if you're innovating? By definition it's a little different.

satvikpendem•34m ago
I don't understand why one wouldn't just auto reject big PRs and tell them to make smaller ones. Sounds like it's a communication and social problem, not a technological one.

Even with AI, just tell it to make smaller self contained PRs. I do this with Claude or GPT models and they do just fine.

xienze•25m ago
> Even with AI, just tell it to make smaller self contained PRs.

Do you want one big PR or 100 small ones? You can't escape the sheer volume of code it's going to produce.

tcoff91•15m ago
I want a set of smaller ones if it's practical.
satvikpendem•13m ago
100 small ones for sure. There could be a way to auto reject new PRs from an author if they have X open ones unreviewed.
sparklingmango•25m ago
Power dynamics. Usually the person making the giant PRs is the one with all the sway. An earlier-career engineer is unlikely to push back against that level of influence.
satvikpendem•10m ago
It can be a company wide policy rather than trying to target a single individual even if the outcome is that they are targeted. This is something that should be addressed to them through a manager etc or if not, it's time to leave while they ruin the product over time.
booleandilemma•33m ago
> Which goes to show that the bottleneck isn't in writing the code. It is in reading and understanding the code.

So all we have to do is write code without reading or understanding it! Larry Wall was right all along!

duskdozer•19m ago
Exactly! They should have set [your agentic AI toolkit could be here!] loose on these issues and 100x'd their output, all while actually shipping fixes to these issues instead of closing them. These Luddites are going to be left in the dust as AI is here to stay!
Rover222•23m ago
The reality is somewhere in the middle. Features are shipping 2x to 5x faster at a lot of organizations, with solid code still being produced and reviewed.

Anyone trying to suggest that AI hasn't sped up quality code production is just insisting on keeping their head in the sand, IMO.

mikemarsh•55m ago
An interesting "conundrum" (at least from my outsider perspective): how many of those bot requests are from agents that utilize Turso on their backends?
jmuguy•54m ago
I wonder what Hacktoberfest would look like now if they were still giving out t-shirts to everyone. Probably not enough cotton in the world.

It can't be on individual maintainers to stop this; IMO it's on GitHub (and GitLab) to stop these sorts of accounts from even getting to the point of submitting PRs. It's essentially spam.

Look at the user who created the first PR they reference https://github.com/Samuelsills. This is not an account that should be allowed to do anything close to opening a PR against a well known repo.

embedding-shape•19m ago
An account with zero activity doing nothing shouldn't be allowed to continue doing nothing? Did you share the wrong account here maybe?
MostlyStable•52m ago
Closing the program is totally reasonable. However, there is another option: Make submitters pay a nominal fee that is returned in the case that a real bug is found.
serhack_•48m ago
cool idea
pornel•43m ago
That would add administrative overhead, and even higher incentive for submitters to endlessly argue they're right.
MostlyStable•37m ago
Price it right. At the right price, it pays for everything you are talking about. At an even higher price, it is basically closing the program.

I'm not trying to suggest they _need_ to implement it. Like I said, closing it is reasonable. Completely aside from any other considerations, one could just decide that they don't feel like dealing with it. But there are other options.

xandrius•42m ago
Easily exploitable without much stretch of a thought.

I'd say closing a program which doesn't work anymore is a better idea.

MostlyStable•36m ago
The majority of the exploits I can think of are fixed by setting the correct price. Other suggestions in this thread of denominating in bitcoin fix the other exploitation: chargebacks.

If you can think of something that isn't solved by one of those two mechanisms, I'd be interested in hearing them enumerated.

Lalabadie•33m ago
How so? These bot systems work on volume – there's no regard for how much reviewer time they gobble up. The idea is to make producing reports basically free, so getting 1 in 1000 positives is still a success if you have no regard for externalities.

If they have to pay for reviewer time for each of 1000 reports, then the scheme stops being viable.
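The economics Lalabadie describes can be put in rough numbers. A back-of-the-envelope sketch, where the hit rate, bounty, and fee values are illustrative rather than anything Turso published:

```python
def spammer_expected_value(fee: float, bounty: float,
                           hit_rate: float, reports: int) -> float:
    """Expected profit for a volume submitter: each report costs `fee`
    up front, and a fraction `hit_rate` of reports wins `bounty`."""
    return reports * (hit_rate * bounty - fee)

# With no fee, 1-in-1000 odds on a $2000 bounty are pure profit at volume.
assert spammer_expected_value(fee=0, bounty=2000,
                              hit_rate=0.001, reports=1000) == 2000
# Even a modest $10 fee flips the sign for the same submitter.
assert spammer_expected_value(fee=10, bounty=2000,
                              hit_rate=0.001, reports=1000) == -8000
```

This is why a refundable deposit mainly punishes low-hit-rate volume: a genuine researcher with a high hit rate gets the fee back almost every time, while the 1-in-1000 bot loses it almost every time.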

soared•34m ago
Moving money is not free, and managing payments/etc can be a huuge headache. Sometimes it’s easy, but sometimes it’s not.
wslh•20m ago
This is one of the cases where crypto works well.
rvz•7m ago
There are many cryptocurrencies that allow anyone to move money quickly, cheaply, and on the same day in less than a minute and requires zero bank accounts.

At this point there isn't an excuse.

user_7832•32m ago
Honestly I think this is a great idea. My only suggestion is instead of being very nominal, it should be "reasonable" (so $10 and not $1).

It's even possible to directly link this to maintainers/employees - if you can review 10 such AI/real things per hour (likely more if it's AI slop that's easy to detect), you're generating another revenue stream. Now, I have no idea if these guys are based in SF Bay or a 3rd world country with low COL but as an "add on", $100 an hour isn't too shabby (and can be on the "low end" if one's good at spotting AI crap.)

Side note, isn't it possible to have some way to verify if the "vulns" are actual vulns or not? ...Heck why not throw an LLM at it, powered by a single $10 submission fee?

KronisLV•10m ago
Sounds like a startup idea to me! Admittedly, the friction and the fact that you have to pay would prevent a lot of legitimate people from participation which sucks.

AI is really throwing a wrench in the economics of software development, isn’t it?

SlinkyOnStairs•14m ago
The problem with that approach is that it will also deter genuine submissions, probably more so than a "no bounty" system.

For those who encounter bugs as part of their employment, they'd now need to convince their employer to fork over money up front. For most employers, getting them to spend even insignificant money is like pulling teeth.

But even for the self-employed or hobbyists, it means gambling real money on "are they going to be a jerk about my exploit report?". No offense towards Turso, but the bulk of software firms are TERRIBLE at handling reports like that. Many already have unstated policies of screwing people out of deserved bug bounties at every step.

To submit such reports today already requires you to accept that your work is, statistically, just going to be a bunch of free labour that you gave away for the betterment of the product's users. Adding a cash fee further deters submissions, especially once people have failed to get their money back a few times. (Consider how many "AI detection tools" are themselves incredibly unreliable machine-learning or even LLM systems.)

KolmogorovComp•12m ago
Unfortunately this isn't all black-and-white. There are some bug bounties where the company is very eager not to pay out, aggressively marking vulnerabilities as out-of-scope or working-as-intended.

In those cases you already lose time, but in the future you would also lose money.

Unfortunately you don't know how a company will react before submitting, especially if it's a small one.

phyzix5761•34m ago
Can't they just beat them at their own game and deploy their own AI bots to pre-screen the PRs?
Aefiam•27m ago
from the article:

> It is possible to set up automated systems to gatekeep this, but with a non-negligible dollar value attached to it, the incentive is just too great for the AIs to just keep arguing, reopening the same PR, etc.

embedding-shape•20m ago
Or, make sure their program doesn't corrupt data so easily, so they don't have to pay out $1000 for every issue others find for them...
satvikpendem•30m ago
Has anyone used Turso in production? It's an SQLite-compatible rewrite in Rust with added features like multiple-writer support, and it's open to external contributions, which SQLite is not.

I was thinking of using it for my full-stack Rust apps just so everything works with cargo and I don't have to bring in SQLite separately.

Lalabadie•28m ago
Good time to mention this fantastic repo acting as a bot honeypot:

https://github.com/UnsafeLabs/Bounty-Hunters

The corresponding leaderboard:

https://clankers-leaderboard.pages.dev

ChrisMarshallNY•14m ago
That's a great project!

It's likely to get blacklisted by AI bots, soon enough, though.

Havoc•22m ago
Definitely feels like we're heading towards an Eternal September (or have already arrived).

...large swaths of approaches to online engagement just becoming non-viable

pscanf•20m ago
We sorely need a way to reliably detect AI slop, but unfortunately it doesn't seem possible and it's just getting harder and harder.

Last month I tried my hand at finding a way to tell whether an OSS project is slop or not, based on the amount of "human attention" it received vs the amount of code it contains. The idea is that a 100k LOC project which received 3 days' worth of attention from a human is most certainly slop.

The approach doesn't work very well, though¹, mostly because it's hard to gauge the amount of attention that was given. If I see one commit with +3000 LOC, I can assume it's AI-generated, but maybe you're just the type of dev that commits infrequently.

Maybe we need some sort of "proof of human attention" for digital artifacts, that guarantees that a human spent X time working on it.

¹ I wrote about it here https://pscanf.com/s/352/
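pscanf's attention heuristic could be sketched roughly as follows. This is an assumption-laden illustration, not the method from the linked post: the `@@commit` delimiter is just a chosen `--format` marker for `git log --numstat`, and the 3000-line threshold comes from the "+3000 LOC in one commit" example above:

```python
def lines_added_per_commit(numstat_log: str) -> list[int]:
    """Parse the output of `git log --numstat --format=@@commit` into
    a lines-added total per commit. Binary files report '-' for the
    added-lines column and count as 0 here."""
    totals: list[int] = []
    for line in numstat_log.splitlines():
        if line == "@@commit":
            totals.append(0)          # start a new commit's total
        elif line.strip() and totals:
            added = line.split("\t")[0]
            totals[-1] += int(added) if added.isdigit() else 0
    return totals

def looks_generated(totals: list[int], threshold: int = 3000) -> bool:
    """Flag histories where any single commit adds more than `threshold`
    lines. As noted above, this misfires on devs who commit infrequently."""
    return any(t > threshold for t in totals)

sample = "@@commit\n12\t3\tsrc/a.py\n@@commit\n4500\t0\tsrc/b.py\n"
assert lines_added_per_commit(sample) == [12, 4500]
assert looks_generated(lines_added_per_commit(sample)) is True
```

Even as a sketch it shows the weakness pscanf describes: LOC per commit is a proxy for attention, not a measurement of it.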

ChrisMarshallNY•8m ago
I suspect that it will be impossible soon. People will just train LLMs to "act human" and pass the various Turing tests we throw at them.

I stay pretty busy[0], and have been accused of "gaming" my GH repos.

That's not the case. I'm retired, experienced, and working on software all day, every day. I just don't get paid for it.

I also don't especially care, whether or not anyone thinks I'm a bot. I eat my own dogfood. Most of my work is on modules that I use in my own projects.

[0] https://github.com/ChrisMarshallNY#github-stuff

curtisblaine•17m ago
Bots are using real tokens for this. So, ultimate honeypot idea: post heavily commented skeleton code in a github repo, promise a generous money reward for closing issues and never pay anyone. See the bots swarm and burn their tokens to write code for you.
dakolli•9m ago
I treat LLM users like special needs people, only dumber. I don't really want to take advantage of the dumbest people in society.
mackeye•4m ago
:D https://github.com/UnsafeLabs/Bounty-Hunters
singpolyma3•14m ago
It's a bit odd that this comes today after so many other projects reverse this finding.
overgard•11m ago
The weird thing is it can't be that economically feasible to burn a ton of tokens in the hopes that you might get a bounty.. seems like a great way to set money on fire.