Can't imagine it makes a big difference to them whether Google pays out $50k or 2x $50k for high-quality bugs relevant to their company
This seems like a perverse incentive creating suboptimal behavior.
One option could be to just open up, like, 10 slots instead, right? Most folks might use this implementation (since it has been posted anyway). That way coming in second place in the race isn't so bad…
The only restriction should be that the first submission of a given bug is the one that gets the reward.
You're brushing over important implementation details - it's not Google running this program but a specific team inside the company with a fixed budget and limited headcount, doing the best it can with what it has.
Your argument is similar to "America has trillions in GDP and could afford to provide twice the number of free treatments to kids with cancer" - it sounds reasonable in aggregate, but breaks down in the specifics; the individual hospital systems, teams and care providers are currently working close to their limits.
Maybe this sort of thing is a market opportunity for the CPU makers?
Add two special instructions: one that iterates a sloth (a deliberately slow hash) given a seed value, and one that cryptographically signs the result to prove it was produced by this model of CPU. Perhaps have hardware rate limiting to prevent overclocking.
Another idea would be to monitor CPU uptime and sign the uptime + a challenge. The uptime is cleared after signing. This is more like proof of stake, where we are proving that we dedicated n seconds of CPU ownership, while allowing the CPU to be used productively otherwise.
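To make the first idea concrete, here's a rough software sketch of that flow. Everything here is hypothetical: a SHA-256 chain stands in for the sloth function, and an HMAC over a made-up "fused device key" stands in for the hardware signing instruction.

```python
import hashlib
import hmac

DEVICE_KEY = b"per-model key fused into the CPU"  # hypothetical, never exported

def sloth_iterate(seed: bytes, iterations: int) -> bytes:
    """Deliberately sequential work: each step depends on the previous output."""
    state = seed
    for _ in range(iterations):
        state = hashlib.sha256(state).digest()
    return state

def sign_result(seed: bytes, result: bytes, iterations: int) -> bytes:
    """Stand-in for the second instruction: attest that this device did the work."""
    msg = seed + result + iterations.to_bytes(8, "big")
    return hmac.new(DEVICE_KEY, msg, hashlib.sha256).digest()

# A challenger picks the seed and iteration count; the prover runs the slow part
# and hands back the signed result as proof that this CPU did the work.
seed = b"challenge-nonce"
result = sloth_iterate(seed, 100_000)
proof = sign_result(seed, result, 100_000)
print(proof.hex())
```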
Are there really so many exploits that people are racing to get it every month?
[0] - https://old.reddit.com/r/GrapheneOS/comments/bddq5u/os_secur...
Also, why not recognize the strides Wayland has made for security on the desktop? Very handwavey take.
And yeah, I don't understand his hate on Flatpak unless he means the sandbox is too user-hostile. So many things break with Flatpaks, from native messaging hosts (think browser extension talking to a desktop application) to theming to local folder access to pasting from the clipboard... it's quite a draconian sandbox.
1. When Linux was "coming of age" (around the turn of the millennium), Windows security was really bad.
2. At the same time, there were far more Windows machines than Linux machines.
3. #1 and #2 together made Windows such an attractive target that exploits for Linux were less of a thing.
"not ok" by LGPL licence so an issue for games,etc that wanted closed source so people talked about it, but a malicious worm creator could definitely go there since they probably wouldn't care about (L)GPL compliance (remember Linus has always been very hard on patches not breaking userspace ABI's).
This was annoying for exploit authors as well. Like an early form of ASLR.
I remember an ad that showed someone who had painted themselves into a corner, using paint in the shade of purple Sun used in its branding, with a slogan something like "we have the way out."
The impact of the average Windows exploit was higher than the average Linux exploit because non-NT Windows didn't use best practices such as multiple accounts + least privilege. And for years there were daemons on Windows that would get exploited with RCE while just idling on a network (e.g. Conficker).
It took Microsoft several years of being a laughing stock of security before Bill Gates made the decision to slow new feature development while the company prioritized security. Windows security then increased. I believe that was around the time that Microsoft started reusing the NT kernel in the consumer versions of Windows.
Also, the myth ignores the fact that cybersecurity risk has components of likelihood (a percentage) and impact (an absolute number, sometimes a dollar value). Conflating the two components invites lots of arguments and confusion, as commonly seen when certain CVEs are assigned non-obvious CVSS scores.
Much later. Windows XP used the NT kernel in 2001, whereas Conficker was in 2008.
The bug/feature ratio is what matters (systems with no bugs but no features are nice, but we have work to do).
User namespaces opened up an enormous surface area to local privilege escalation since it made a ton of root-only APIs available to non-root users.
I don't think user namespaces are available on Android, and they're sometimes disabled on Linux distributions, although I think more are starting to enable them.
Relatedly, kernelCTF just announced they will be disabling user namespaces, which is probably a good idea if they are inundated with exploits: https://github.com/google/security-research/commit/7171625f5...
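To illustrate the point about root-only APIs, here's a minimal sketch of the classic pattern (assumes Linux with unprivileged user namespaces enabled and Python 3.12+, which added os.unshare and the CLONE_* constants; run as a normal user):

```python
import os

real_uid = os.getuid()              # remember our UID before entering the namespace

os.unshare(os.CLONE_NEWUSER)        # creating a user namespace needs no privileges here
# Map our real UID to 0 inside the namespace; mapping only your own UID is allowed
# without CAP_SETUID.
with open("/proc/self/uid_map", "w") as f:
    f.write(f"0 {real_uid} 1")

print("uid inside namespace:", os.getuid())   # prints 0: "root" within the namespace
# We now hold CAP_SYS_ADMIN in our own namespace, so kernel paths that used to be
# reachable only by real root (creating a network namespace, the netfilter setup
# behind it, etc.) become reachable by any local user - that's the extra surface.
os.unshare(os.CLONE_NEWNET)
```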
Public CTFs are hard because inevitably some team will try something resembling a DDoS as part of racing to the finish.
Google removed the proof of work step after this.
Just seems weird for it to be a race.
Just when I was starting to wonder if we finally had a chance to thwart L7 attacks using these PoW tricks.
> ...and despite supporting it [AVX512] on consumer CPUs for several generations...
I dunno. Before Rocket Lake (11th gen), AVX512 was only available in enthusiast CPUs, Xeon CPUs, or some mobile processors (which I wouldn't really call consumer CPUs).
With the 12th gen (and the performance/efficiency core concept), they disabled it on those cores after a few months and it was never seen again.
I am pretty sure, though, that after AMD has some success with AVX512, Intel will reintroduce it.
Btw, I am still rocking an Intel i9-11900 CPU in my setup here. ;)
The 12th gen CPUs with performance cores didn't even advertise AVX512 support or have it enabled out of the box. They didn't include AVX512 on the efficiency cores for space reasons, so the entire CPU was considered to not have AVX512.
It was only through a quirk of some BIOS options that you could disable the efficiency cores and enable AVX512 on the remaining CPU. You had to give up the E-cores as a tradeoff.
Not really, no. OS-level schedulers are complicated as is with only P vs E cores to worry about, let alone having to dynamically move tasks because they used a CPU feature (and then moving them back after they don't need them anymore).
> and honestly probably could have supported them completely by splitting the same way AMD does on Zen4 and Zen5 C cores.
The issue with AVX512 is not (just) that you need a very wide vector unit, but mostly that you need an incredibly large register file: you go up from 16 * 256 bit = 4096 bits (AVX2) to 32 * 512 bit = 16384 bits (AVX512), and on top of that you need to add a whole bunch of extra registers for renaming purposes.
Not necessarily - you only need to behave as if you had that many registers. IMO it would have been way better if the E cores had supported AVX512 but half of the registers didn't physically exist and were instead backed by the L2 cache.
Or if Intel really didn't want to do that, they needed to get AVX10 ready for 2020 rather than going back and forth on it for ~8 years.
That'd be an OS thing.
This is a problem that has been solved in the mainframe / supercomputing world and which was discussed in the BSD world a quarter of a century ago. It's simple, really.
Each CPU offers a list of supported features (cpuctl identify), and the scheduler keeps track of whether a program advertises use of certain features. If it does want features that some CPUs don't support, that process can't be scheduled on those CPUs.
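A toy model of that bookkeeping (feature names and CPU layout are made up for illustration): each CPU advertises a feature set, each task declares the ISA extensions it may use, and the scheduler only places it on CPUs that cover them.

```python
CPU_FEATURES = {
    0: {"sse4.2", "avx2", "avx512f"},   # P-core
    1: {"sse4.2", "avx2", "avx512f"},   # P-core
    2: {"sse4.2", "avx2"},              # E-core, no AVX-512
    3: {"sse4.2", "avx2"},              # E-core, no AVX-512
}

def eligible_cpus(task_features: set[str]) -> list[int]:
    """CPUs this task may be scheduled on, given the features it advertises using."""
    return [cpu for cpu, feats in CPU_FEATURES.items() if task_features <= feats]

print(eligible_cpus({"avx2"}))             # [0, 1, 2, 3] -- can run anywhere
print(eligible_cpus({"avx2", "avx512f"}))  # [0, 1] -- restricted to the P-cores
```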
I remember thinking about this way back when dual Nintendo-cartridge Pentium motherboards came out. To experiment, I ran a Pentium III and a Celery on an adapter card, which, like talking about self-hosting email, infuriated people who told me it couldn't be done. Different clock speeds, different CPU features, et cetera - it worked, and worked well enough to make me wonder what scheduler changes would make using those different features work properly.
[1] - https://cdrdv2.intel.com/v1/dl/getContent/784343 (PDF)
1. https://dpitt.me/files/sime.pdf (hosted on my domain because it's pulled from a journal)
2. https://github.com/aws/aws-lc/blob/9c8bd6d7b8adccdd8af4242e0...
Separately, I'm wondering whether the carries really need to be propagated in one step. (At least I think that's what's going on?) The chance that a carry-in leads to an additional carry-out beyond what's already there in the high 12 bits is very small, so in my code, I assume that carries only happen once and then loop back if necessary. That reduces the latency in the common case. I guess with a branch there could be timing attack issues, though.
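For intuition, a rough Python toy of that kind of scheme (the 52-bit limbs with 12 bits of headroom and the loop-back are my assumptions for illustration, not taken from the linked code):

```python
LIMB_BITS = 52                     # value bits per 64-bit word
LIMB_MASK = (1 << LIMB_BITS) - 1   # top 12 bits left free to absorb carries

def carry_pass(limbs):
    """One 'parallel' carry step: every limb keeps its low 52 bits and hands its
    high bits to the next limb, as a SIMD lane-wise pass would for all lanes."""
    carries = [l >> LIMB_BITS for l in limbs]
    out = [l & LIMB_MASK for l in limbs]
    for i in range(1, len(out)):
        out[i] += carries[i - 1]
    return out   # carry out of the top limb is ignored in this toy

def normalize(limbs):
    """Assume a single pass suffices; loop back only in the rare case a carry-in
    pushes some limb over 52 bits again (that data-dependent loop is the branch
    with the potential timing concern)."""
    limbs = carry_pass(limbs)
    while any(l >> LIMB_BITS for l in limbs[:-1]):
        limbs = carry_pass(limbs)
    return limbs

print(normalize([LIMB_MASK + 3, LIMB_MASK, 0, 0]))  # rare case: needs a second pass
```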
One can also upload to archive.org: https://archive.org/download/sime_20250531/sime.pdf
So it may be actively encouraging the "wrong" thing as well.
And at the same time you don't want to piss off the community, so here we go.
Also, attack surface reduction is a very valid strategy: it may seem like it's only about userspace (a sandbox for every app, etc.), but it can make a big difference in how much of the kernel attack surface is exposed.
There's a lot to be said for "distro kernel config choices may not be as secure as possible", but that's not really an "Android" vs. "vanilla Linux kernel" difference.
The previous in-kernel CFI implementation (before the kinda joint effort - kCFI) was upstreamed by Google, too: https://www.phoronix.com/news/Clang-CFI-Linux-Patches and https://www.phoronix.com/news/Linux-Kernel-Clang-LTO-Patches. Pixel devices also had this long before. Given that the entire Linux kernel feature was developed out of Android I find it a little bit unfair to call it "using a vanilla kernel feature".
Arguing "Who first added a feature" seems to be a losing spiral of needless tribalism. How many features does the Android kernel use that weren't developed by Google? Does that mean they wouldn't have developed those features? Or just that there's no point making a parallel implementation if there's already one existing.
(*) For example, Android uses SELinux to confine apps, virtual machines (pKVM) to run DRM code, and so on. All these increase the overall security of the system and decrease the cost of kernel bugs, so there's a tradeoff that's not easy to evaluate.
The proof of work isn't there to add a "four second race". It's to prevent DDoS-like spam.
And the spreadsheet is public [0], I counted 7 unique hoarded bugs (submitted immediately after launch) after deduplicating by patch commit hash. Then in the month following, another 9 unique bugs were submitted.
As for how much was paid, I don't remember. Could be around $200k-$300k in total.
[0] https://docs.google.com/spreadsheets/d/e/2PACX-1vS1REdTA29OJ...
basically nextjs + tailwind + mdx
The previous submission, which the author describes as being some expensive FPGA one, was 4+ seconds. You'd think he'd mention something about how second place in his week was potentially the second-fastest submission of all time, no?