frontpage.
newsnewestaskshowjobs

Made with ♥ by @iamnishanth

Open Source @Github

fp.

Open in hackernews

Welcome to the Strip Mining Era of OSS Security

https://www.metabase.com/blog/strip-mining-era-of-open-source-security
37•salsakran•1h ago

Comments

marginalx•1h ago
Clearly for commercial oriented opensource software, security through obscurity is one way to keep the pace in the short term. Not an option for proper open source software. Will this be the case that people who use open source software that is easily detectable will also start to shy away from using them for the fear of zero-days?

One of the benefits of Open source has been that there are more eye balls on the source, leading to more secure code/better quality. I think given enough time the bug reports will plateau and we will be back to a normal cadence - once the tsunami is over, hopefully things will settle at a more manageable cadence .

dynawicki•50m ago
This benefit you speak of is actually just a meme.

Source that is unmaintained is dead. Nobody is looking at it, even the maintainer has something better to do.

Do you know whats even more powerful than "eyeballs"? Money.

salsakran•49m ago
I'm not sure that the benefit of many eyes helps here. So much of this bulk scanning is low-effort, and if you're a smart person developing closed source software you get the benefits of bulk scanning, but _at the time of your choosing_ .

OSS has always had tradeoffs and I sadly think this one is going straight to the "Cons" column. We still think the Pros outweigh the Cons, but this is NotGreat.

Joel_Mckay•47m ago
Lets be honest, LLM with fuzzers are going to pound any llvm generated binary right in the hubris.

Won't matter if is closed source, signed, and or obfuscated. =3

dynawicki•53m ago
Good luck getting anyone who values their time to even triage the results. I would rather lick the bottom of a NYC dumpster that a rat had just died in.
salsakran•47m ago
That was true last year -- things changes.

Ignore (admittedly low-effort LLM generated) reports at your own peril.

dynawicki•45m ago
Software will eventually become "unmaintainable due to lack of interest", because of this very thing. People not invested in this are not "in peril" in any way.
bluGill•17m ago
A lot of people are invested without realizing it. I'm typing this on a computer running linux, with all the standard services/software. I maintain one OSS project (icecc - we have always said only run on trusted networks. I'm sure there are a lot of issues in our code but nobody has bothered run a scan yet to my knowledge), but I don't pay attention to everything. I'm sure there are known easy to exploit (with a LLM) issues on this computer just because my distro hasn't updated yet. (I need a better distro, but even the most up to date will constantly have these issues)
dynawicki•10m ago
What you just described may be accurate. But it also is the essence of a "trap". My comment about investment was more to that point.

If software "is a trap", even my ever-computing loving wrote first programs on an Apple II in the 80s will only be as you sort of describe invested in by reference (minimal usage).

But no-one will sign up for a "trap" as a career, and only those who do will deal with its problems. The first thing that comes to mind is "Johns", "Hotels", and the trappings of the sex trade.

gmuslera•44m ago
The problem on the side of closed source software is that if there had been leaks of source code, the vulnerabilities and exploits may remain unknown for long time.
pixl97•41m ago
I would go to say that most closed source software code gets leaked. Most companies hold that info close and don't disclose it, even if legally required unless it's made public.
aetherspawn•42m ago
Say I had $1000, how do I get the best value for money to discover vulnerabilities? Are there any worthwhile LLM powered services that are turnkey and ready to go?
ben_w•24m ago
From what I've heard, every LLM before Mythos (which you can't get, they'll call you if you're big enough) will have far too many false positives to be helpful, so I guess the best option would be to use an agent to help you (not lights-off vibe coding!*) take advantage of all the older tools like valgrind and closing all the compiler warnings?

* I presume I'm not the only one to find the agents tasked with adding unit tests will sometimes try to sneak through "open source code and apply regex to confirm presence or absence of specific string literal".

They can speed you up significantly, but you absolutely do need to pay attention to what they produce.

salsakran•19m ago
With all respect to the Anthropic folks, that's just marketing. (If they're reading this: let us into the program so I can be proven wrong here.)

I'm sure what they have is awesome, but it's clear that there are people out there with some decent prompts that are getting results out of widely available models as well.

The big thing we're sharing is: bulk scanning by random people in random geographies got a _lot_ better around January, it's widely distributed, and it's going to get a lot better regardless of whether that specific version of Mythos becomes widely available or not.

embedding-shape•18m ago
> prompts that are getting results out of widely available models as well.

Absolutely, and the "false-positive" issue people keep citing as why Mythos is so good is easily solved in the harness, simplest solution is starting fresh context with another prompt to evaluate if it's a false-positive or not, just adding that drastically cuts down the rate.

bluGill•15m ago
That is false. A year ago every LLM generated report was slop - more likely a false positive than correct. However in the past few months nearly every LLM generated report is real.
embedding-shape•19m ago
Not sure about turnkey solutions for finding vulnerabilities that doesn't involve having to hand over a bunch of identity proofs for them to store on their insecure infra and also enrolling in programs.

Besides that, hiring a beefy GPU instance at Vast.ai or similar places then running your own uncensored models on it, I've had great success with AEON-7/Qwen3.6-27B-AEON-Ultimate-Uncensored-NVFP4, smart + uncensored, but there are lots of options, probably some are already tailored for security research.

adamtaylor_13•36m ago
> Did you have other plans for the weekend? Or a long term project you’re prioritizing? That’s nice, you have a new plan — fix every vulnerability that comes in NOW.

Umm... no? It's called OPEN source. Expecting people to cancel their plans to make your free software more secure is pretty audacious. Luckily, many WILL, but the expectation is just foolish.

salsakran•34m ago
That line was aimed at other OSS maintainers.

These alerts are absolutely not being shared publicly before we have a fix for them.

salsakran•35m ago
Side conversation -- This is all stuff we're seeing in white/grey hat land. What's going on in blackhat land?
bluGill•27m ago
Nobody really knows of course. However it is safe to assume they are not so stupid as to ignore what is happening in the other areas (at least some of them), and so they are running their own targeted scans and then trying to figure out how to make money (or whatever their goal is) by exploiting them. They are also using LLMs to try things on closed source that are more than a brute force attack, though I have no idea what those would be.
as3qkaH•31m ago
Apparently the AI company Metabase has a very poor code base. Like so many others, instead of questioning their own (or AI) output, they help their AI overlords by promoting security scans.

Fact is that Mythos found only one issue in curl and nothing at all in most code bases. It is getting quiet around Mythos, and the AI companies will move on to the next scam.

bluGill•23m ago
Mythos found only one issue in curl - but it didn't start until many other LLMs had been run and found a lot of issues that were fixed. If Mythos was run a year ago it would have found over 100 issues (of course it didn't exist a year ago, nor did the other tools).
4ladf1•17m ago
Curl had many old protocols and code from the 1990s that no one used. Besides, Mythos was claimed to be better than existing tools.

In most open source projects, Mythos or similar tools have found nothing. The AI people only contact the projects where they find something, because it would be bad for marketing otherwise.

_alternator_•19m ago
The article focuses on OSS, but closed-source software is at major risk too. Perhaps more.

It's gotten much easier to reverse engineer binaries in general, and security patches in particular. Basically, an LLM can turn binaries into 'readable' code, and then reason about said code.

salsakran•17m ago
Perhaps -- but I think for most people, the vast majority of proprietary software they consume is over the network.

But yeah, if you're distributing binaries publicly, then you're going to have very similar problems.

redanddead•11m ago
That happens a lot though, even OpenAI is attempting to lock functionality (like computer-use, 2 weeks ago) behind a binary -- Mac only they said, no EU. I saw a guy crack it the same day, ported to Windows*. There are many many things like Rive that use binaries, obfuscation and uglification has been the name of the proprietary game for a long ass time guys like "surely nobody would go through that trouble", yeah an LLM would ralph loop through it all day long, and make what you paid good money for pretty much free for anyone to use whenever they feel like it, we're back to "you wouldn't download a car would you?" levels

*that was two entire weeks ago, what I'm seeing now makes that guy's binary crack look like a toy, it's becoming systematized now

hrjriritifif•7m ago
I do not think author understands how opensource works. You have a problem on your computer, in __your__ software, and somehow some random dude is responsible for fixing it? Sure if you gimme a few kilo USDs I will drop everything and come to rescue you. But for free it is a volunteer gig I do once a month....
le-mark•7m ago
So what does this mean for the open source ecosystem? Unmaintained or “finished” projects will be labeled as to unsafe to use?

O(x)Caml in Space

https://gazagnaire.org/blog/2026-05-14-borealis.html
102•yminsky•2h ago•6 comments

Too dangerous or just too expensive? The real reason Anthropic is hiding Mythos

https://kingy.ai/ai/too-dangerous-to-release-or-just-too-expensive-the-real-reason-anthropic-is-h...
26•chbint•24m ago•12 comments

Show HN: Find the best local LLM for your hardware, ranked by benchmarks

https://github.com/Andyyyy64/whichllm
198•andyyyy64•3h ago•26 comments

Explore Wikipedia Like a Windows XP Desktop

https://explorer.samismith.com/
234•smusamashah•4h ago•59 comments

Welcome to the Strip Mining Era of OSS Security

https://www.metabase.com/blog/strip-mining-era-of-open-source-security
37•salsakran•1h ago•29 comments

Radicle: Sovereign {code forge} built on Git

https://radicle.dev/
30•KolmogorovComp•1h ago•3 comments

Removing the modem and GPS from my 2024 RAV4 hybrid

https://arkadiyt.com/2026/05/13/removing-the-modem-and-gps-from-my-rav4/
936•arkadiyt•20h ago•482 comments

SigNoz (YC W21, open source Datadog) Is hiring for growth and engineering roles

https://signoz.io/careers
1•pranay01•1h ago

UK government replaces Palantir software with internally-built refugee system

https://www.bbc.com/news/articles/c2l2j1lxdk5o
346•cdrnsf•14h ago•121 comments

Show HN: GlycemicGPT – Open-source AI-powered diabetes management

https://github.com/GlycemicGPT/GlycemicGPT
55•jlengelbrecht•8h ago•36 comments

A few words on DS4

https://antirez.com/news/165
355•caust1c•14h ago•145 comments

NanoTDB – Golang Append-Only Time Series DB

https://github.com/aymanhs/nanotdb
11•aymanhs72•2h ago•0 comments

Building ML framework with Rust and Category Theory

https://hghalebi.github.io/category_theory_transformer_rs/
57•adamnemecek•20h ago•14 comments

Details of the Daring Airdrop at Tristan Da Cunha

https://www.tristandc.com/government/news-2026-05-11-airdrop.php
185•kspacewalk2•9h ago•66 comments

RTX 5090 and M4 MacBook Air: Can It Game?

https://scottjg.com/posts/2026-05-05-egpu-mac-gaming/
632•allenleee•21h ago•147 comments

Steve Jobs Next Computer: His Forgotten Exile Years

https://spectrum.ieee.org/steve-jobs-next-computer
52•rbanffy•2h ago•45 comments

First public macOS kernel memory corruption exploit on Apple M5

https://blog.calif.io/p/first-public-kernel-memory-corruption
392•quadrige•18h ago•102 comments

Gyroflow: Video stabilization using gyroscope data

https://github.com/gyroflow/gyroflow
118•nateb2022•3d ago•19 comments

Codex is now in the ChatGPT mobile app

https://openai.com/index/work-with-codex-from-anywhere/
376•mikeevans•17h ago•183 comments

New Nginx Exploit

https://github.com/DepthFirstDisclosures/Nginx-Rift
399•hetsaraiya•19h ago•89 comments

Cursing the government does not fix potholes. Spray-painting them does

https://imagenotfound.writeas.com/the-holes-we-painted-and-why-we-did-it-anyway
9•bogomil•18m ago•8 comments

Bitwarden scrubs 'Always free' and 'Inclusion' values from its site

https://www.fastcompany.com/91542655/bitwarden-scrubs-always-free-and-inclusion-values-from-its-w...
19•gpi•1h ago•5 comments

Mullvad exit IPs are surprisingly identifying

https://tmctmt.com/posts/mullvad-exit-ips-as-a-fingerprinting-vector/
455•RGBCube•10h ago•266 comments

Claude for Legal

https://github.com/anthropics/claude-for-legal
120•Einenlum•16h ago•114 comments

Tesla Wall Connector bootloader bypasses the firmware downgrade ratchet

https://www.synacktiv.com/en/publications/exploiting-the-tesla-wall-connector-from-its-charge-por...
109•p_stuart82•16h ago•52 comments

Solar-based sleep patterns compared to modern norms

https://dylan.gr/1775146616
90•James72689•8h ago•78 comments

UK sovereign LLM inference

https://relax.ai/docs
89•benjamintnorris•3h ago•98 comments

HDD Firmware Hacking

https://icode4.coffee/?p=1465
203•jsploit•20h ago•28 comments

RISC-V Router

https://router.start9.com/
131•janandonly•17h ago•75 comments

What's in a GGUF, besides the weights – and what's still missing?

https://nobodywho.ooo/posts/whats-in-a-gguf/
160•bashbjorn•19h ago•48 comments