All of big tech wins: CPUs get slower, so we need more vCPUs and more memory to serve our JavaScript slop to end customers. The hardware companies sell more hardware, and the cloud providers sell more cloud.
edit: “burstable” CPUs are a fourth category, relying on overselling the same virtual CPUs while intelligently distributing workloads to keep them at 100%.
Can't just turn off hyperthreading.
Side Note: Folks, don't run EOL operating systems at home. Upgrade to Linux or BSD, and your hardware can live on safely.
Especially not EOL Windows.
That's not something I'd easily associate with a step forward in security.
What kind of workloads have noticeably lower performance with VBS?
Overhead should be minimal, but something is preventing it from working as well as it theoretically should. AFAIK Microsoft has been improving VBS, but I don't think it's completely fixed yet.
BF6 requiring VBS (or at least "VBS capable" systems) will probably force games to find a way to deal with VBS as much as they can, but for older titles it's not always a bad idea to turn off VBS to get a less stuttery experience.
QEMU can also use WHP via --accel whpx.
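For reference, a WHPX-accelerated QEMU invocation looks roughly like this (the disk image path, memory size, and CPU count below are illustrative placeholders):

```shell
# Launch QEMU on Windows using the Windows Hypervisor Platform accelerator.
# Image path and resource sizes are placeholders, not recommendations.
qemu-system-x86_64 -accel whpx -m 4G -smp 4 \
  -drive file=disk.qcow2,format=qcow2
```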
[0] - https://techcommunity.microsoft.com/blog/virtualization/vmwa...
[1] - https://www.impostr-labs.com/use-hyper-v-and-virtualbox-toge...
https://news.ycombinator.com/item?id=44805565 Secure Boot is a requirement to play Battlefield 6 on PC
> It's the Javelin Anti cheat system which forces the use of secure boot
Are these performance hit numbers inclusive of turning off the other mitigations?
The RISC-V ISA has an effort to standardize a timing fence[0][1], to take care of this once and for all.
0. https://tomchothia.gitlab.io/Papers/EuroSys19.pdf
1. https://lf-riscv.atlassian.net/wiki/spaces/TFXX/pages/538379...
It doesn't change the fact that when you implement a RISC-V core, you're going to have to partition/tag/track resources for threads that you want to be separated. Or, if you're keeping around shared state, you're going to be doing things like "flush all caches and predictors on every context switch" (can't tell if that's more or less painful).
Anyway, that all still seems expensive and hard regardless of whether or not the ISA exposes it to you :(
i.e. not about reducing cost, but about being able to kill timing side channels at all.
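As a toy, application-level analogue of the timing side channels being discussed (purely illustrative; the microarchitectural channels a timing fence targets are not expressible in Python):

```python
import hmac
import timeit

SECRET = b"s" * 1024

def naive_eq(a: bytes, b: bytes) -> bool:
    """Byte-wise comparison with early exit: its runtime reveals the
    position of the first mismatching byte."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

wrong_early = b"x" + b"s" * 1023   # mismatch at byte 0: loop exits immediately
wrong_late  = b"s" * 1023 + b"x"   # mismatch at last byte: full scan

t_early = timeit.timeit(lambda: naive_eq(SECRET, wrong_early), number=5000)
t_late  = timeit.timeit(lambda: naive_eq(SECRET, wrong_late), number=5000)

# The late-mismatch comparison takes measurably longer; an attacker who can
# time the check can recover the secret byte by byte. The software fix is a
# comparison whose runtime is independent of the data, e.g.
# hmac.compare_digest(a, b) -- the analogue of "killing" the channel.
print(t_late > t_early)
```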
So I should probably post something more realistic, and compare against the old mitigations. This will make ASI look a LOT better. But I'm being very careful not to come across as a salesman here. It's better that I risk making things look worse than they are than risk having people worry I'm hiding issues.
[0] https://cyberscoop.com/cloud-security-l1tf-reloaded-public-c...
Before doing this though, you need to be sure that ASI actually protects all the memory you care about. The version that currently exists protects all user memory but if the kernel copies something into its own memory it's now unprotected. So that needs to be addressed first (or some users might tolerate this risk).
api•5mo ago
Honestly running system services in VMs would be cheaper and just as good, or an OS like Qubes. VM hit is much smaller, less than 1% in some cases on newer hardware.
jeroenhd•5mo ago
Probably works best running VMs with the same kernel and software version.
infogulch•5mo ago
> However, while KSM can reduce memory usage, it also comes with some security risks, as it can expose VMs to side-channel attacks. ...
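For anyone weighing that trade-off: on Linux, KSM is controlled through a standard sysfs knob (root required; a quick sketch, not a hardening guide):

```shell
# /sys/kernel/mm/ksm/run: 0 = stop merging, 1 = run,
# 2 = stop and un-merge all previously shared pages.
cat /sys/kernel/mm/ksm/run
echo 2 | sudo tee /sys/kernel/mm/ksm/run   # disable KSM and split shared pages
```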
gpapilion•5mo ago
The protection here is to ensure the VMs are isolated. Without it, there is the potential to leak data across guests via speculative execution.
bjackman•5mo ago
If the community is up for merging this (which is a genuine question - the complexity hit is significant), I expect it to become the default everywhere, and for most people it should be a performance win vs. the current default.
But, yes. Not there right now, which is annoying. I'm hoping the community is willing to start merging this anyway with the trust we can get it to be really fast later. But they might say "no, we need a full prototype that's super fast right now", which would be fair.