So to sum up: Valorant's anti-cheat, which the author sees as something like an ideal solution:
- starts up and loads its kernel driver on boot.
- generates a persistent unique ID based on hardware serial numbers and associates this with my game account.
- stays active the entire time the system is up, whether I play the game or not. But don't worry, it only does some unspecified logging.
- is somehow not spyware or a data protection risk at all...
For anyone saying “just do server side,” no, it’s physically impossible to stop all cheating that way until we have internet faster than human perception.
Don't let perfect be the enemy of good.
The problem is that server-side occlusion is only a small piece of the puzzle. A naïve implementation means hundreds of thousands of raycasts per second, which doesn’t scale. Real engines rely on precomputed visibility sets, spatial partitioning, and still have to leak some data client-side for responsiveness.
Basically, the kernel-level check isn't laziness; it's a response to problems that are unsolvable without huge compute costs or added latency.
Hundreds of thousands of raycasts per second sounds doable to me, but couldn't you just use a GPU and some simplified level geometry? That ought to scale well enough. It's not free or perfect (knowing the position of a hand a cheat will be able to estimate where the head is anyway), but that's not the goal, right?
It's a cat-and-mouse game.
BasicallyHomeless has made it his life mission to eradicate cheating in video games.
I think you already know the answer. Yes, it's bottlenecked by latency and jitter (of the laggiest player, no less), and on top of that the maximum possible movement velocity makes it much, much worse in fast-paced games. It's been attempted a few times since at least the late '90s, with predictable results.
In other words, complete server-side calculations are a fantasy. Besides, they won't even remotely make cheating impossible or even harder! Even complete hardware lockdown won't.
The naive raycast from the player camera to another player would be fine for perf, but it may count partially visible players as invisible, so it's unacceptable. You'd have to raycast every pixel of the potentially visible player model to stay conservative. With movement + latency this expands to every pixel the player model could potentially occupy during your max latency period, and you need to consider the viewer moving too!
In practice this expands to a visibility test between two spheres with radius max_latency*max_movespeed + player_model_radius. Now, you could theoretically do a bunch of random raycasts between the spheres and get an answer that is right some of the time, but it would be a serious violation of our conservativeness criteria and the performance would get worse with more rays/better results. Also keep in mind that we need to do this for every single player/player pair a few dozen times per second, so it needs to be fast!
To do this, you need a dedicated data structure that maps volumes to the other volumes visible from them. There are a few, and they are all non-trivial and/or slow to build well (google e.g. potentially visible sets, cell-portal graphs + occlusion). You also trade performance for precision, and in practice your walls might become 'transparent' a bit too early. With all this in place, we can actually "do occlusion calculations server-side".
There's just one problem with this that I still don't know a solution for, namely precision. With fast players and imprecise conservative visibility, things you care about are going to count as visible pretty often, including stuff like enemies peeking from behind a corner (because they could have moved at full sprint for 100ms and the end of the wall is rounded away in your acceleration structure anyway) so all this complexity might not get you that much, particularly if your game is fast paced. You'd prevent some wallhacks but not the ones that really matter.
TLDR yes, it's actually hard and might not be good enough anyway
I've seen videos where cheats are particularly easy to detect if you are also cheating. I.e. when you have all the information, you can start to see players reacting to other players before they should be able to detect them. So it should be possible to build a repertoire of cheating examples and clean examples using high level players to catch a fair amount of cheating behavior. And while I understand that there are ways to mitigate this and its an arms race, the less obvious the cheats are, the less effective they are, almost by definition.
If someone is consistently reacting outside the range of normal human reaction times, they're cheating. If they randomize it enough to be within human range, well, mission accomplished, kind of.
If they're reacting to other players in impossible ways by avoiding them or aiming toward them before they can be seen with unusual precision or frequency, they're cheating.
A lot of complex game dynamics can be simplified to 2D vectors and it shouldn't be that computationally intensive to process.
The first is "never trust the client", i.e. realtime validation and having the server be the sole authority on the current game state. This is the straightforward solution to think of for programmers, but it's also practically infeasible due to latency, etc.
But what the server could do is a "trust but verify" approach: accept data from the clients when they submit it, but have some background processes that can analyze the data for anomalies and, if too much of it was detected, trigger a ban.
The only problem I see with this approach is that cheaters might react by repeatedly making new accounts and playing as them until the verification process has caught up and bans the account.
Cheating would be more obvious - as cheaters would have to start over with a beginner character every time - but it could still be annoying.
So the problem of ban evasion would become even more important. And I don't really see how a purely server-side solution could work there.
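As a sketch, the "trust but verify" part could be as simple as a background pass over recorded position updates. The tick format, the speed cap, and the ban threshold below are all made-up illustrations of the idea, not anyone's actual implementation:

```python
from dataclasses import dataclass

MAX_SPEED = 8.0       # illustrative server-side movement cap, units/sec
BAN_THRESHOLD = 5     # anomalies before the background job triggers a ban

@dataclass
class Tick:
    t: float          # timestamp in seconds
    x: float
    y: float

def count_anomalies(ticks):
    # Accept every update in realtime, but later check that no two
    # consecutive positions imply impossible movement speed.
    anomalies = 0
    for a, b in zip(ticks, ticks[1:]):
        dt = b.t - a.t
        if dt <= 0:
            anomalies += 1
            continue
        dist = ((b.x - a.x) ** 2 + (b.y - a.y) ** 2) ** 0.5
        if dist / dt > MAX_SPEED:
            anomalies += 1
    return anomalies

def should_ban(ticks):
    return count_anomalies(ticks) >= BAN_THRESHOLD
```

The appeal is that none of this runs on the client, so there's nothing for the cheat to tamper with; the cost is that bans arrive after the fact, which is where the new-account problem comes in.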
Don't worry, it's owned by Tencent.
Honestly I feel like you should only use kernel anticheat on a dedicated machine that's kept 100% separate from any of your personal data. That's a lot to ask of people, but you really shouldn't have anything you don't consider public data on the same hardware.
Probably the only workable solution is for Windows to provide some kind of secure game mode where the game, and only the game, runs, and Windows can attest that nothing else is running. But the anti-cheat would have no access to the data in the real OS, which is currently not running. It ruins multitasking, but assuming you can switch over fast enough it might not be too bad.
doesn't actually stop all cheaters.
We could have a better discussion around this if we recognize that failing to stop 100% of something isn't a prerequisite to rigorously evaluating the tradeoffs.
I'd argue the potential for abuse is a perfectly reasonable discussion to have, and doesn't have much bearing on the effectiveness of anticheat, but I understand that's not the point you are trying to make.
I didn't claim we should trust the company. Whether we can trust the anticheat maker is certainly part of the rigorous evaluation of the tradeoffs I mentioned. My point was that saying "it doesn't stop cheaters" is both incorrect and stifling to a more productive conversation, because it implies anticheat has no value and is therefore worth no risk.
As for me, if Gabe said "now you can opt your Steam Deck in to a trusted kernel we ship with anticheat and play PUBG," I'd probably do it. But that's because I, for better or worse, tend to trust Gabe. If Tencent were shipping it, I'd probably feel differently.
It is absolutely the case that there would be more cheating if we turned off the only partially effective systems. We know this because they are regularly stopping and banning people!
Cheating is a big draw to Windows for semi-pro gamers and mid streamers. What else is there to do except grind? Windows gives the illusion of "kernel level anti-cheat," which filters out the simplest ones, and fools most people some of the time.
For instance, a common cheat in Street Fighter 6 is to trigger a drive impact in response to the startup of a move that is unsafe to a drive impact. That is recognizing the opponent's animation and triggering an input. There's no part of that which cares where the game simulation is being done. In fact, this kind of cheating can only be detected statistically. And the cheats have tools to combat that by adding random triggering chances and delays. It's pretty easy to tune a cheat to be approximately as effective as a high-level player.
Kernel-level anticheat isn't a perfect solution, but there are people asking for it. It would make cheating a lot harder, at least.
As does Valorant and virtually every other first-person shooter. The cheats aren't people flying around or noclipping; it's wallhacks and aim assists/bots.
I can move and reveal what's behind a corner a lot faster than a network roundtrip, so either the server needs to give some advance warning or you're going to see enemies pop into existence suddenly.
And computing if somebody is almost visible isn't trivial either. Level geometry can have narrow openings such as holes in a wall. Or what if somebody jumps?
And that's before getting into non-visual information. It's not perfect, but a cheat could still gain a significant advantage by drawing the exact location of footsteps.
So yeah, (some) games try, but network latency means the client needs some information a wallhack can use, and the alternative (being killed by an enemy that was invisible) is at least as frustrating as being killed by a cheater, so the visibility estimate has to be generous.
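To put numbers on that generosity: with round-trip latency L and top speed v, the server has to reveal anyone within roughly L·v (plus a model radius) of becoming geometrically visible. A tiny back-of-the-envelope helper, with purely illustrative figures:

```python
def reveal_margin(rtt_s, max_speed, model_radius):
    # Distance an enemy can close (or a corner you can clear) during one
    # round trip; the server must reveal anyone within this margin of
    # becoming visible, or players see enemies pop into existence.
    return rtt_s * max_speed + model_radius

# e.g. 60 ms RTT, 7 units/s sprint, 0.5-unit model:
# reveal_margin(0.060, 7.0, 0.5) is about 0.92 units of early,
# wallhack-usable information per enemy.
```

Every unit of that margin is data a wallhack can render, which is the trade-off being described.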
Correct. Unfortunately, what you've just described is a gaming console rather than a PC. This problem fundamentally undermines the appeal of PC gaming in a significant way, imo.
Yes, game publishers are trying to turn PCs into a gaming console, which IMO will always be a futile effort, and is quite frankly annoying. I don't game on PC to have a locked down console-like experience.
Just embrace the PC for what it is and stop trying to turn it into a trusted execution platform with spyware and rootkits.
Look at BF6: for all the secure-boot and TPM-requiring anti-cheat they stuffed it with, there were cheaters on day 1. So why abuse your users when it's clearly ineffective anyway?
The game companies keep saying these things are necessary, yet they don't fully do the very thing they claim to do on the label.
Can't help but ask myself sometimes: why would users want to pay in the first place for the content of someone who invests more money and leverage than some people see in their entire lives into delivering user-hostile technical countermeasures that are, most of the time, ultimately futile?
What is the supposedly valuable thing one gets out of the work of someone who treats their audience this way, however awesome their stuff might be? That's what makes the least sense to me. But then I remember that most people aren't very intentional about most of their preferences and will accept whatever an unaccountable industry serves into everyone's lives at the same time in a predictable manner, and I despair.
Of course the argument falls flat on multiple levels: It ignores other ways to prevent cheaters, like server-side detection or maybe developing a gameplay that is not based on channeling masses of anonymous strangers through the game world. It ignores that it doesn't actually solve the problem of cheaters. And it ignores that many games use anticheat for reasons that don't have to do with multiplayer at all, e.g. to keep players from bypassing in-game purchases.
Whereas with anti-cheat and DRM, only the 'good guys' get hit, since the 'bad guys' don't follow "the law" anyway.
A 14-year-old who installs an autoclicker to mess with friends or randoms online I can get. But there are fully grown adults who dedicate their time and substantial amounts of money (a whole second computer) just to win in online video games?
What's the motivation/justification for spending hundreds or even thousands of dollars on cheating hardware and software? Are these just super-rich people who have more money than sense?
No doubt there are various reasons, some more understandable than others. There are some fascinating historical cases, like the one explored in "The King of Kong":
Which is well worth a watch, if you're curious.
b) should’ve specified this is the bigger problem. glad to see from the other comment bf6 is coming on-board, but VALORANT doesn’t and that’s probably the quintessential title for this.
Wouldn't it be sufficient to simply have a minimal system installed on a separate partition or a separate drive (internal or external)? Boot that for gaming, and never give it the password for the encryption of your non-gaming volumes.
Yes, and at that point, you may as well use Windows for that machine.
But anti-cheat hasn't been about blocking every possible way of cheating for some time now. It's been about making cheating as inconvenient as possible, thus reducing the number of cheaters.
Is the current fad of using kernel-level anti-cheats what we want? Hell nah.
The responsibility of keeping a multiplayer session clean of cheaters was previously shared between developers and server owners. Today that responsibility has fallen mostly on developers (or rather game studios), since they want to own the whole experience.
The idea that we allow arbitrary code execution at some point, then claw back security by running mass surveillance on your PC, is clearly insane.
The only way forward is what BF6 has done: ensure the PC is in a pristine state and that nothing bad was loaded into the kernel. Ironically, that's also why their anticheats conflicted; they don't allow loading random crap into the kernel.
Not to mention, the people who develop these invasive security modules don't have the expertise, resources, or testing culture to be mucking about in the kernel to the degree they do.
Just how dangerous this actually is was showcased by CrowdStrike last year.
Microsoft doesn't do any auditing besides "is this the most obvious malware?"
IIRC, even Microsoft was getting fed up with hands in the kernel after CrowdStrike, so we may see it disappear eventually if Microsoft starts cracking down.
I oppose kernel-level anticheat because once it's in place, it will proliferate, even to single player games, just as it has in Windows.
In other words, once it's broadly supported, the number of games available to me (assuming I want to avoid kernel-level anticheat) will actually _shrink_.
But the alternative is cheaters in the game, which your point doesn’t really address. So for many it is a necessary evil, so to speak.
This is a reasonable stance because these things are fundamentally at odds and can't be reconciled on one machine. Either you have an open hackable system, where security comes from cryptography and transparency, or you have a locked down system where security comes from inaccessibility and obscurity.
1) There is a 100k bug-bounty on the anti-cheat: https://hackerone.com/riot?type=team
2) The anti-cheat is the game's entire reason for being. It is the main focus of the development and marketing. People buy Valorant for the anti-cheat; they are willing to accept a kernel driver as a trade off for fairer competition.
Fair competition is all well and good, but there are other ways to do it and I can already tell you that the war on kernel-level anti cheat is well under way. There are already people cheating in Valorant, and that will not slow down. If anything, it's going to get more common because cheaters and cheat creators are some of the most diligent people out there.
No? In which case, what practical spyware risk does a kernel level driver add that user mode software can’t do?
User mode software can spy on your clipboard, surreptitiously take screenshots, and take data out of your system. That spooks me enough that, if I don’t trust a software manufacturer, I don’t install it. Kernel mode makes no practical difference in my security posture.
Not on any properly secured Linux machine. But yes, it's generally a bad idea to install software you don't trust, a category that anticheats slot nicely into, given their resistance to auditing and analysis.
- Creating a unique ID that is directly bound to hardware.
- Accessing the memory of any process, including browsers or messengers.
- Installing persistent background processes that are hidden from the rest of the system.
But I think that's the wrong question. Talking about the kernel driver is a distraction.
The abuse scenario that I think is most likely would be that the game and/or anticheat vendor uses the hardware ID for user profiling instead of just ban enforcement, and that the "logging" functionality is coopted to detect software or activities that aren't related to cheats at all, but are just competition of the vendor or can once again be used for profiling, etc.
None of that strictly requires a kernel driver. Most of that stuff could be easily done with a usermode daemon. But under normal circumstances, there is no way I'd install such a program. Only in the name of cheat prevention, suddenly it gets permissible to make users install that stuff if all they want to do is play some game.
Yes.
And you would rightly tell them to piss off and get out of your house, because that makes no sense. If you really wanted to torture the metaphor, you could I guess argue that they need full access to your house just in case you decide to pull some loaded dice out of the filing cabinet or something, but that's not really the important thing to me. The important thing is that, regardless of whether or not I trust the developer of the anti-cheat, the game just isn't that important.
https://www.youtube.com/watch?v=RwzIq04vd0M
It seems to me that kernel-level anti-cheat is little more than a speed bump for determined cheaters.
Obviously, our personal priorities differ. That's fine, but yours don't invalidate my earlier point.
By the way, it's never just one determined cheater. Once discovered, circumvention techniques get shared, just as with mod chips and exploit scripts. It's only a matter of time before anyone willing to do a little reading or buy a little hardware can use them. And they do. (Often on alt accounts, with no fear of getting banned.)
In other words, any relief from game cheaters is bound to be temporary, while harm from spyware or exploit is irreparable to anyone who values the privacy of their data.
This is why kernel-level anti-cheat systems are so widely criticized. They might make sense on dedicated gaming machines, where the risks are low, but the situation is very different on general-purpose computers.
Because somehow Proton is better than standing up for actual GNU/Linux games.
So, like IBM with OS/2 and Windows, studios keep ignoring Linux and let Valve do whatever is needed; it's Valve's problem to sort out.
Is the memory of this kernel module protected from access by another kernel module?
Which obviously causes all kinds of issues, and violates both freedoms 0 and 1 https://www.gnu.org/philosophy/free-sw.en.html
And they don't just remove those freedoms regarding the game, but for the entire system.
They do not, as long as you can disable the anti-cheat and reboot.
Even if the game itself doesn't grant me that freedom, my OS and drivers should not prevent me from attaching a debugger to the game without it noticing.
My computer, and the software on it, should obey me, and me alone. Never should they obey a developer's desire to restrict what I can and cannot do.
That is the ideological basis of the free software movement, and as you may have noticed, incompatible with client side anticheat.
I'd end up in court if I gave a random game developer root permissions on the same system that I use for client projects. But installing a kernel module is fine?
If the valorant module wanted, it could intercept anything from that point on. It could intercept me trying to uninstall it, and pretend it had been removed, while just hiding itself. It could intercept any debugging I'd be trying to do, and feed me false data.
That's why I don't use proprietary kernel modules, and never run proprietary code with root permissions.
And I shouldn't have to. Games don't need client side anticheat.
Why do even many single player games now ship with anti-cheat? Because they want to protect their lootboxes and microtransactions.
And even competitive games don't need client side anti-cheat. Most games are perfectly fine with a well-written server-side anticheat, and the ones that don't work fine if you host a private server with people you know.
No other part of IT would ever trust the client. Giving the client information they shouldn't have is an instant CVE, and so is relying on client-side validation.
But client-side anticheat is cheaper, and matchmaking increases engagement, so alternatives are dismissed.
I don't want to play with randoms. Even in mmorpgs I prefer finding a group via the zone chat, which also encourages finding a guild and making friendships, over playing with randoms. Especially if the matchmaking doesn't even take party roles into account.
So why should I break my clients' trust to give control of my system to someone I don't know to install software I don't want just so I can play a game with matchmaking just because the developer didn't want to pay for proper server-side anticheat?
Genshin's anticheat was used to install ransomware, ESEA's anticheat was used to install bitcoin miners on users' machines, EA's anticheat was used to hack clients' computers during a tournament, etc.
When not explicitly malicious, anticheat software is at best spyware that monitors your computer use to identify cheating. People complain a ton about Microsoft Recall storing screenshots of your computer locally being a security risk, and yet they're fine with a Chinese-owned anticheat program taking screenshots of your computer and uploading them online. And even if the company isn't trying to use that info to spy on you, my understanding is that when you're a Chinese company, you have to give the government full access to that data.
With the ongoing/rising tensions between the US and China, I actually think there's a significant chance that we may see all Chinese owned anticheat programs banned in the US, which would be pretty significant since they own or partially own the majority (as far as I know).
Well, I don't think anyone reasonable should be telling others what they "should" be ok with, myself included (I made an exception this one time).
> Genshin's anticheat was used to install ransomware
You should tell the full story: ransomware installed Genshin's anticheat because it was whitelisted by antivirus providers, then used the anti-cheat to load itself deeper into the system. So not really a problem with Genshin's anticheat (indeed, users who had never played the game or even heard of it could be affected), but a problem with how antivirus providers dealt with it.
> ESEA's anticheat was used to install bitcoin miners
You should tell the full story: Someone compromised the supply-chain and snuck a miner into the anticheat binary. It was discovered immediately, and the fact that the miner was in the anticheat and not, say, a game loader, did nothing to hide it.
> People complain a ton about Microsoft recall storing screenshots of your computer locally being a security risk, and yet they're fine with a Chinese owned anticheat program taking screenshots of your computer and uploading them online
This is just a fallacy. Like saying "people voted for candidate A, but then they voted for candidate B!" Obviously, there can be multiple groups of people, and saying that "people" vaguely support X but not Y is usually a misunderstanding of the groupings involved.
The obvious explanation for this "apparent" contradiction you point out is: Windows Recall is likely to be an on-by-default feature, and people don't really trust Microsoft not to "accidentally" enable it after an update. Also, Recall would likely be installed on all computers, not just gaming PCs. That's a big deal. A lot of people have multiple PCs, because they're cheap and ubiquitous these days. Maybe they're okay with Recall and/or anticheat taking snapshots of their gaming PCs, but not the laptop they use to do their taxes, etc. The source of your confusion is likely the misunderstanding that most people, unlike the HN crowd, are practical, not ideological. They don't oppose anticheat on some abstract level; they care about the practical reality it brings to their life.
Another element is that most people, at least in the US, have "spy fatigue". They figure, hey, the US government spies on me, the Five Eyes spy on me, Russia and China spy on me, what does it matter?
The distinction doesn't really matter. The claim wasn't that the ransomware authors exploited deficiencies in the anticheat design, just that the anticheat was used to install the ransomware, which it was.
Software with that level of access having a supply chain compromise is not an argument in its defense.
Alas, I'd like to believe we could be in an era of "hey, not a problem, just have a dedicated gaming machine," but that too is difficult.
You can do this on macOS too, by the way. XNU is open-source.
We can run tasks on them that only produce valid output if the boot chain is verified.
How would one get the modified XNU past the verified-boot process? Turn off verified boot?
It's much harder to cheat if the game isn't running on your computer.
The ultimate "anti-cheat" is playing on some trusted party's computer. That can be a cloud machine, but I think today a game console would work just as well, turn that closed nature into an actual user-facing benefit. Console manufacturers seem focused on their traditional niche of controller couch gaming and not on appealing to high-FPS keyboard-and-mouse gamers, though.
It doesn't even seem very hard to implement, steam already has the ability to stream games, they could add this pretty easily as an option for any game (although there is the concern of the extra cost of running the servers).
That shouldn't be a problem if all players, regardless of the OS, are required to use the same cloud service with similar latency.
XIM fakes being a controller but is KBM. I sort of wonder whether it’s possible to use a camera to get a stream of the game and make an aimbot either by making a fake controller or a robot that manipulates a real controller.
True that wallhacks aren’t possible via peripherals, though. You might be able to get some level of info from the audio output and map knowledge, but nowhere near the same as true ESP.
To be fair kernel anticheat can't block this completely either, it can be run on external hardware that uses a capture card to analyze your video feed and alter your mouse inputs to the computer. Generally undetectable unless the game is able to identify unnatural mouse movements.
I think at some point defeating this becomes impossible. This sort of cheating isn't much different conceptually from just having someone who's really good at the game play for you.
Not if only the rendering is done on the client. Look at rocket league.
Edit: of course, it is still possible to cheat in rocket league, but because all physics state is server authoritative at best a perfectly coded cheat could play like a perfect human, not supernatural.
Of course, to TFA's point on network code, a lot of the issues in question could come down to checking for movements that exceed human capability: moving faster than the maximum in-game speed, twitch aiming faster than a mouse allows, or a consistently superhuman level of accuracy in shooting over time. On the last part, I'm not sure if there might be some way to mask a user's hit zones in rendering so that an aimbot thinks the foot is center-mass, etc., or if they could be randomly shifted in a test scenario.
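The twitch-aim part of that check can be sketched server-side from view-angle samples alone. The 5000 deg/s cap below is an assumption for illustration; real detectors would model the whole movement curve, not one threshold:

```python
# Flag view-angle changes faster than a human could plausibly flick.
# The cap is an illustrative assumption, not a tuned value.
MAX_ANGULAR_SPEED_DEG = 5000.0

def yaw_delta(a, b):
    # Smallest signed difference between two yaw angles, in degrees,
    # handling the wrap-around at 360.
    return (b - a + 180.0) % 360.0 - 180.0

def flag_twitches(samples):
    # samples: list of (timestamp_s, yaw_deg); returns the indices of
    # intervals whose angular speed exceeds the cap.
    flagged = []
    for i, ((t0, y0), (t1, y1)) in enumerate(zip(samples, samples[1:])):
        dt = t1 - t0
        if dt > 0 and abs(yaw_delta(y0, y1)) / dt > MAX_ANGULAR_SPEED_DEG:
            flagged.append(i)
    return flagged
```

A humanized aimbot that stays under the cap sails straight through, which is the arms-race point made throughout the thread.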
I think the more important question isn't how you implement an anti-cheat, it's why some types of games attract cheaters.
When victory in a game isn't about strategy but just about how quickly you can click on a character's head, and doing it once wins you the fight, the whole game becomes a clear target for cheating. Everyone cheats as the sniper; nobody cheats as the medic.
I think you could make an FPS that cheaters hate by designing it so that it requires at least 2 players to defeat a player on the opposite team, e.g. by giving everyone weapons of different type and needing two types to defeat an enemy.
I wonder if anti-cheating game design is a thing?
Game designers could have just worked on their ranking systems, and let the cheaters rocket off into their own domain of impossibly-high-elo games. Let there be a cheaters' league. It could be fascinating: what does fully-cheated gameplay look like? Just ban disruptive behavior like DDoSing other players.
OTOH, artificially lowering your rank to stomp low-level players is a problem. But cheaters, as well as just legitimately really good players, can do this; the place to solve this is the ranking system.
To put it in another way: either I'm bad at a competitive game, or I'm playing against cheaters. Once you start feeling like that, neither scenario seems like an enjoyable time, so why play at all?
I feel like the biggest problem to me is that these types of games are INSANELY popular, but personally I'd rather play something less skill-based and more fun-based. These competitive games just keep appearing in front of me all the time despite that fact I don't enjoy them.
Of course, I still remember seeing cheaters back then, in that game... usually quickly kicked off the server you were playing on.
Even a hacked kernel won't have access to the key material stored inside of the TPM, though, so it wouldn't be able to fake the remote attestation key material used to sign any challenges.
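A toy model of why that holds: the attestation key never leaves the TPM, so even code that fully controls the kernel can only ask the TPM to sign, not forge signatures itself. The sketch below uses an HMAC as a stand-in for the TPM's signing key (real TPMs use asymmetric keys and enrolled public keys, and sign PCR digests inside a quote structure); all names here are hypothetical:

```python
import hmac, hashlib, os

def make_tpm():
    # The key is "sealed" inside the TPM and is never exported; a hacked
    # kernel can call tpm_quote() but can never read the key material.
    return {"key": os.urandom(32)}

def tpm_quote(tpm, challenge, pcr_digest):
    # Sign the server's fresh challenge together with the boot measurements.
    return hmac.new(tpm["key"], challenge + pcr_digest, hashlib.sha256).digest()

def server_verify(enrolled_key, challenge, expected_pcrs, quote):
    # The server recomputes the expected quote for the known-good boot state.
    expected = hmac.new(enrolled_key, challenge + expected_pcrs,
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, quote)
```

A quote over tampered PCR values fails verification, and replaying an old quote fails because the challenge is fresh. The hard part, as the replies note, is everything around this: secure boot, revocation, and trusting the measurement chain at all.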
Using TPMs this way requires secure boot which only permits non-exploited, signed kernels to load signed operating system images and signed drivers. Revocation of exploitable software and hardware must be harsh and immediate. That means most dTPMs (which have been proven vulnerable to numerous side-channel attacks) are unusable, as well as some fTPMs from CPUs running old microcode. Several graphics cards cannot be used anymore because their drivers contain unpatched vulnerabilities. Running tools with known-exploitable drivers, such as CPU-Z and some motherboard vendor software, would imply a permanent ban.
This approach can work well for remotely validating the state of devices in a highly secure government programme with strict asset management. For gaming, many hardware and software configurations wouldn't be validatable and you'd lose too much money. Unfortunately, unlike on consoles, hardware and software vendors just don't give a shit about security when there's a risk of mild user inconvenience, so their security features cannot be relied upon.
You can do what some games do and use TPMs as your system's hardware identifier, requiring cheaters to buy whole new CPUs/motherboards every time an account is banned. You can also take into account systems like these but don't rely on them entirely, combining them with kernel-level anticheat like BF6 does (which requires secure boot to be enabled and VBS to be available to launch, though there are already cheaters in that game).
And while the kernel is quite secure against hacks from userspace, the hardware interfaces are generally more trusted. This is not a problem on smartphones or embedded devices where you can obfuscate everything on a small SoC but the whole PC/x86_64 platform is much more flexible and open. I doubt there is a way to get reliable attestation on current desktop systems (many of which are assembled from independent parts) unless you get complete buy-in from all the manufacturers.
Finally, with AI systems recently increasing in power, perhaps soon the nuclear option of camera + CV + keyboard/mouse will become practical.
I'm pretty sure GRUB is infamous now for being a source of secure boot bypasses.
It's the sort of thing I think Valve will do with the Steam Deck eventually.
https://playvalorant.com/en-gb/news/dev/vanguard-hits-new-ba...
Of course the cheat developers don't sit idle, so this is far from over.
Anti-cheat does not ordinarily like to run inside a VM, because then the hypervisor can do the cheating, invisibly to the kernel. However, technologies like AMD SEV can (in theory) protect the guest from the host, using memory encryption. (And potentially protect from DMA-based cheats, too.)
What you'd need is some way for the hardware to attest to the guest "yes, you really are running inside SEV".
In theory you could probably get it to work on some hardware given some boot configurations with some games, but what game developer is going to develop a bespoke Linux VM? And if not the game developer, what Linux developer is going to spend time developing a platform that caters to the wishes of closed-source, rootkit-driven anticheat developers?
That doesn't seem right. Hardware virtualization is not a feature many consumers use, so why would they spend the money to include it in consumer chips?
Besides that, these aren't area-heavy features; it's cheaper to share the core design and just have the feature available anyways than to design it out.
I'm largely a console gamer, so I don't have to worry about EA's latest malware opening my computer up to the world. I'm also a filthy casual though.
---
Let me ask you a question. How many vulnerable drivers (yes, those that can be abused by bad actors to gain kernel access) do you think the average gamer has on their Windows install? I’ll start with my own system. This is what I can immediately think of:
• MSI Afterburner - RTCore64.sys driver (yes, even in the latest version) has a vulnerability that allows any usermode process to read and write any kernel memory it wishes
• CPU-Z - cpuz142_x64.sys driver has (again) kernel memory read/write vulnerability and MSR register read/write
If I looked hard enough, I would most likely find more.
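To make the point concrete, here is a minimal sketch of the kind of check that flags those drivers. The blocklist here contains just the two drivers named above; a real scanner would hash the driver files and compare against Microsoft's full vulnerable-driver blocklist rather than matching names:

```python
# Sketch: flag driver files whose names appear on a small, illustrative
# blocklist of known-vulnerable drivers. Name matching is a toy stand-in
# for hashing files against Microsoft's vulnerable driver blocklist.

KNOWN_VULNERABLE = {
    "rtcore64.sys",     # MSI Afterburner helper driver
    "cpuz142_x64.sys",  # old CPU-Z driver
}

def flag_vulnerable(driver_files):
    """Return the subset of driver filenames found on the blocklist."""
    return sorted(f for f in driver_files if f.lower() in KNOWN_VULNERABLE)

installed = ["RTCore64.sys", "nvlddmkm.sys", "cpuz142_x64.sys"]
flagged = flag_vulnerable(installed)  # -> ["RTCore64.sys", "cpuz142_x64.sys"]
```

As the later comments note, even a clean scan means little: a program can simply bundle and load its own copy of an old vulnerable driver.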
And if you're not doing something particularly sensitive, then security on consumer PCs must matter a lot less than some people think.
The problem with these is actually worse. Any program with the necessary permissions can load these drivers. Some malware likes to ship known-vulnerable drivers with one of their later stages to get kernel code execution, and Microsoft doesn't want to revoke the signatures of this malware because applications and hardware will stop working.
You don't need CPU-Z to be installed; you just need to run a program that decided to bundle the (old) CPU-Z driver.
Anticheat is only hard because people are looking for a technical solution to a social problem. The actual way to get a good game in most things is to only play with people you trust and, if you think someone is cheating, stop trusting them and stop playing with them.
This doesn't scale to massive matchmaking scenarios of course - and so many modern games don't even offer it as an option - so companies would have to give up the automatic ranking of all players and the promise of dopamine that can be weaponised against them, but it works for sports in the real world and it worked for the likes of Quake, UT, etc. so I don't think it's a necessarily bad idea. Social ostracism is an incredibly powerful force.
However, it does mean that the big publishers wouldn't have control over everything a player does. Getting them to agree to that is probably the real hard problem.
However, I wonder if you could have that while still removing features that make cheating seem appealing. For example, as you said, you can have games with randoms without an automatic ranking of all players. (Or maybe you rank players so you can match people of similar skill levels, but you don't tell anyone what their rank is.)
Works basically the same as matchmaking does now, albeit in only matching on server quality and not player skill.
This does not stop cheaters whatsoever. Anyone who played during the private server era of FPS in the late 90s/early 00s knows this; wallhacking, modified character models with big pointy spikes indicating player locations, aimbots, etc. ran rampant, even when nothing was on the line.
Good skill matching is one of the most important advancements in gaming over the last few decades. Being able to consistently play against people who are fair competition for you makes the games so much more fun, especially if you are much better or much worse than the average player. In the old days, you could alternate between opponents that were no challenge at all and opponents you would have no chance against; both types of games get old really fast.
In some ways, good skill matching can alleviate the harm cheaters do; if the cheating makes them way better than everyone else, then good matchmaking should start to match them up only against other cheaters. In many ways, this is the ideal scenario - cheaters play against each other, and everyone else plays against people who are close in skill level.
This is because many people get really upset when their displayed rating doesn't go up after playing a lot of games, even though plateauing is exactly the expected result once you reach your true skill rating. Players want constant progression, so developers show a separate score that keeps going up while using a hidden true rating for matchmaking.
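The hidden-vs-displayed split can be sketched like this (an Elo-style update; the K-factor and the displayed-score increments are made up for illustration, not taken from any real game):

```python
# Sketch: hidden Elo-style rating drives matchmaking; the player only
# ever sees a score that monotonically increases.

K = 32  # illustrative K-factor

def expected(r_a, r_b):
    """Expected win probability of a rated r_a against r_b."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

class Player:
    def __init__(self, rating=1500):
        self.hidden = rating  # true skill, used to pick opponents
        self.shown = 0        # "progression" score shown to the player

    def record(self, opponent, won):
        e = expected(self.hidden, opponent.hidden)
        self.hidden += K * ((1 if won else 0) - e)
        self.shown += 10 if won else 1  # never goes down

a, b = Player(), Player(1600)
a.record(b, won=True)   # hidden rating rises, shown score rises
a.record(b, won=False)  # hidden rating falls, shown score STILL rises
```

The hidden rating converges and then oscillates around the player's true skill; the shown score keeps climbing, which is the whole trick.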
The history of plenty of anticheats starts with community servers, not matchmaking. Even Team Fortress Classic had enough of a cheating issue that community members developed PunkBuster, which went on to be integrated into Quake 3 Arena. A lot of third-party anticheats were developed in that era for community servers: BattlEye for Battlefield games, EasyAntiCheat for Counter-Strike. I even remember Starcraft: Brood War's third-party ICCUP server with its 'antihack'.
You still see this today with additional anticheats on community server solutions. GTA V's modded FiveM servers had anticheats before one was added to the official game. CS2 FACEIT and ESEA servers have additional anticheats because people do not think VAC is effective enough.
For some games the small group approach works, but even a game as simple as Counter-Strike requires at least a dozen players to get the most out of it.
That said, there are perverse incentives in many of the games hit worst by cheaters. Games that invent more and more prestigious rewards and titles for accounts that do well, in hopes of them spending more money on microtransactions, or the microtransaction hell-holes like GTA Online that exist as a vessel to take your money more than to be any fun. Adding upgrades and other desired items behind a gambling mechanic makes the whole ordeal extra shitty, preying on the psychological weaknesses of unfortunate souls to give them a digital gambling addiction so they can be sucked dry by billion dollar companies.
I've personally never run into anticheat issues because I find most of the games that require anticheat for online play just aren't worth the time and effort to play online in.
But still, the old SW Battlefront II wouldn't be fun without the massive online matches, and those require some form of anticheat to stay fun.
As much as I reminisce about the days of private servers for Quake/2/3, UT99, CS1.6, etc., saying this is really ignorant of how modern gaming and matchmaking works. Some games would simply not be possible without public matchmaking; I don't care how much of a social butterfly you are, you are not going to get 99 friends together for a PUBG match. Even getting 11 other people to run a game of Overwatch or CS would be a pain. Other games need public matchmaking to have a fair ranking system. You go on to say ranking is "weaponised", but ranking is a feature, and a lot of people like that feature.
> However, it does mean that the big publishers wouldn't have control over everything a player does. Getting them to agree to that is probably the real hard problem.
The demand for anticheat, and matchmaking/ranking systems, are entirely player-driven, not publisher-driven. If developers and publishers could get away with only implementing player-managed servers and letting players deal with cheaters, they would! It's a lot less work for them.
As a sibling comment mentioned, even in the days of private servers you ended up with community-developed tools like Punkbuster. I remember needing to install some anti-cheat crap when I signed up for Brood War's private ICCUP ladder.
If you listen to the people complaining about cheating... it doesn't.
> I don't care how much of a social butterfly you are, you are not going to get 99 friends to get a PUBG match going.
True, but my county is able to get more than that number of people into a cricket league. You don't need to personally know everyone, just be confident that there is a system of trust in place that would weed out any rotters. Is such a system going to be perfect? No, but neither are any of the top-down approaches attempted in videogames. At least this one doesn't require me to install an umpire in my home at all times.
> As a sibling comment mentioned, even in the days of private servers you ended up with community-developed tools like Punkbuster.
The difference is that you could have played the game without doing that. If you didn't trust the people on that server, how likely would you be to install those tools?
I played against the EVO 2025 world champion Street Fighter 6 player in ranked matchmaking last week. When's the last time your county cricket team played against anyone who's won the Cricket World Cup?
We're fundamentally talking about different activities here. Lamar Jackson doesn't get to choose who he plays against in the NFL; if he wants to win the Super Bowl he has to play against Joe Burrow. If Joe Burrow cheats by deflating some footballs, there has to be a system in place which catches him and doles out appropriate punishment. Your "solution" is essentially telling Lamar to not worry about it and just play flag football with his friends instead.
I realize this type of activity isn't for everyone, and there's something to be said about too many games becoming overly competitive, but your proposed solution doesn't really address the problem.
Please. They'll take our collective computing freedom if we don't keep these separate.
It seems so, and I think your example underlines this:
> Lamar Jackson doesn't get to choose who he plays against in the NFL; if he wants to win the Super Bowl he has to play against Joe Burrow. If Joe Burrow cheats by deflating some footballs, there has to be a system in place which catches him and doles out appropriate punishment.
I don't know who those people are, but I'll assume that this is a reasonable pairing of NFL players. Are you saying that there is no system in place to catch cheating in the NFL? Because I'm pretty sure that there is - it is just made out of people, rather than software.
Software anti-cheat seeks to stop everyone cheating everywhere, and this is clearly impossible. Apply current anti-cheat expectations to IRL sports: in a league with as many participants as the NFL, a cheater might get away with it for a bit, but I'm sure that if it turned out the Steelberg Bunglers were deflating their balls every game, it would be a massive scandal that makes national television. They would probably have to be audited (install anti-cheat) for a season or two before people would trust them to play a clean game again.
Squad has 100 player games, and despite its anticheat having well-known bypasses, I don't see a lot of hacked client cheating. Why? Because I play on servers that consistently have a couple people online during the hours I play that ban anybody who cheats.
Community servers have a lot more moderators than the game devs could possibly afford, because they can build trust with volunteers.
Good bot AI is the solution. Playing with 99 bots that you can be sure aren't cheating, is better than playing with 99 people you don't know who might be cheating.
The problem is that this costs more than game companies are willing to spend, even when they’re raking in cash hand over fist. As long as the problem isn’t so bad that it’s making players quit, it’s cheaper to employ more automated, less effective strategies. The end goal isn’t player happiness, it’s higher profit margins.
I've not really thought about it so deeply until right exactly now (thanks, all!), but I think doing so might have led me to a very unpopular opinion - I might be prepared to say that this problem can't be solved in an anonymous environment. Unless you have a reputation to ruin (or, say, an xbox account to lose), then being outed as a cheater costs you nothing. Again, this is incompatible with a lot of current multiplayer modes - and most of what I love about PC gaming - but, ultimately, I'd rather be judged by my peers than a rootkit.
One of the games mentioned in this article is Rust. Playing with only people you trust defeats the point because it's a game full of betrayal. At best you'll be able to get a group together once and then destroy your relationships more than Monopoly would.
In some cases there are numerous public servers, which can mitigate the "player availability" problem.
Also, for these online FOSS games the servers are community-owned and moderated. Cheaters, trolls, inappropriate chats are monitored by someone who is interested in, and generally quite knowledgeable about, the game.
In the same graphic style, there's Veloren [2] which is probably the most promising MMORPG-like FOSS game. This isn't Luanti, but a Rust project from scratch (not that I care about the implementation language, though, except when its poor performance becomes a limiting factor). Played some, but voxel graphics without a world fully editable by players makes less sense to me.
I used to play Cube 2: Sauerbraten [3], an arena shooter. This is an old game, and the community is slowly shrinking. It was a good casual shooter when I played, but by now the players who remain are probably accuracy monsters; I was never that good, and getting old doesn't help. Offline vs bots is still fun, and there's a couple of campaigns in the box. Unvanquished [4] is an arena shooter as well, with an original concept (played it a little a while ago, should check it out again).
Battle For Wesnoth [5] is top tier as far as turn-based strategy goes.
Oolite [6] is a recreation of the original Elite (Elite: Dangerous is the latest opus). It's a single-player game, with 600+ mods last time I checked. I was quite into it before moving to Luanti when my interests shifted. Oolite sort of runs at its own pace sometimes; the player doesn't have full control over the amount of "action" happening. It is also a difficult game, especially in the early game, where it can be unforgiving if not unfair. IIRC, the game comes with a PDF containing some good advice.
I sometimes play some SuperTuxKart [7] single player. Good for casual racing. There's also a football mode which is probably like Rocket League, but that's not my thing. I used to play StuntRally [8] before the author rebased it onto a more recent graphics engine after a long hiatus, which my rig cannot run for some reason. StuntRally has a lot of features - it had an online mode before SuperTuxKart did.
0 A.D. [9] is an AoE-like with top-tier graphics. Didn't play it a lot because I don't much like games where you mass-kill humans, especially women - call me old school.
In Zero-K [10] it's robots versus robots, so it's fine by me. As an RTS, I find it in many ways superior to Starcraft except for graphics, partly because it is tuned to handle a massive number of units (no population cap). It is based on the Spring engine, which features pleasant unit positioning; recently I tried Warzone 2100 again, but the lack of this feature made it "unplayable" for me. Other games like Zero-K with better graphics are Evolution RTS and Beyond All Reason, but I didn't play them because my rig can just barely handle ZK already.
[4] https://unvanquished.net/category/news/
[5] https://wesnoth.itch.io/battle-for-wesnoth
[8] https://stuntrally.tuxfamily.org/
[10] https://zero-k.info/
Ultimately the OS should be providing a service that can verify a program is running in a secure environment and hasn't been tampered with. That's something that's useful for things far beyond games. I kind of hope the cheaters win this war for now, to create the incentive for building a better, proper, standardized, cross-platform solution.
Have the kernel itself actually deny any access... The game devs ship a build without debug symbols (not that debugging could work with it on) and run with that... This should also severely limit what that process can do in terms of communication outside itself. And maybe a launch warning from the OS: "You are about to launch a sealed application that cannot be observed, do you want to continue? Y/N"
Then all a cheater has to do is run a custom kernel that has an API that responds to that request but then lets another process read/write the memory anyways.
You have to keep in mind something. The cheaters don't give a shit about what they have to do to let a cheat work. It's only the legit players that are like "I don't want anti-cheat to have kernel access". Cheaters will flash a custom BIOS to their motherboard if they have to without a second thought, while legitimate players would be absolutely horrified of the idea of needing a custom BIOS for anti-cheat, and very rightfully so.
Only because the makers of those DMA cards do a bad job hiding themselves. They either use vague, recognisable names, or don't act like the devices they're spoofing.
The moment a cheat developer manages to reprogram an actual SSD (especially a common model), hardware detection like that becomes near impossible.
Proponents of such junk can get lost with their fake justifications of why kernel level anti-cheat malware should be acceptable. They should instead work on server side anti-cheats.
If your garbage collector is grabbing an entire arena of memory and moving it constantly, doesn't that limit a cheat to asking an API to retrieve an object because only the managed memory knows where objects reside at any given moment?
Who needs opaque binary blob kernel modules or whatever for anti-cheat when you can bootstrap a secure boot and remote attestation setup? It's possible for a game server to verify cryptographically that someone is running stock firmware, stock bootloader, stock TCB userspace, a stock game executable, and that no debugger is attached. You don't need cat and mouse BS with executable obfuscation. You don't need inscrutable spyware. You don't need to prohibit VMs. All you need to do is configure your program not to be debuggable, prohibit network MITM (e.g. with certificate pinning), and then use remote attestation to make sure nobody has tampered with the system to make it ignore your anti debugging configuration.
All of the components involved in this trust chain can be open source. There's no spyware involved. No rootkit. No obfuscation. Everything is transparent and above board.
The only downside (besides implementation complexity) is that the remote attestation scheme is incompatible with running custom builds of the components remotely attested. But so what? Doing so isn't a requirement of open source. You can still run custom builds too -- just not at the same time you play your game.
Seems like a fair compromise to me
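The challenge-response flow described above can be sketched in a few lines. This is a toy model, not a real TPM interface: HMAC with a pre-shared key stands in for the attestation key signature, and a plain hash chain stands in for PCR extension; a real implementation would use a TPM2 Quote over PCRs and verify an EK/AK certificate chain.

```python
# Toy attestation sketch. Assumptions: a pre-enrolled shared key stands
# in for the TPM attestation key; a hash chain stands in for PCRs.
import hashlib
import hmac
import os

AK = b"attestation-key-shared-for-demo"  # assumption: enrolled out of band

def measure(components):
    """Hash-chain the booted components, PCR-extension style."""
    state = b"\x00" * 32
    for c in components:
        state = hashlib.sha256(state + hashlib.sha256(c).digest()).digest()
    return state

def quote(nonce, components):
    """Client: 'sign' the measurement together with the server's nonce."""
    return hmac.new(AK, nonce + measure(components), hashlib.sha256).digest()

def verify(nonce, expected_components, reported_quote):
    """Server: recompute the expected quote and compare."""
    good = hmac.new(AK, nonce + measure(expected_components),
                    hashlib.sha256).digest()
    return hmac.compare_digest(good, reported_quote)

nonce = os.urandom(16)  # fresh per challenge, prevents replay
stock = [b"firmware-v1", b"bootloader-v1", b"kernel-v1", b"game-v1"]
assert verify(nonce, stock, quote(nonce, stock))
assert not verify(nonce, stock,
                  quote(nonce, [b"firmware-v1", b"bootloader-v1",
                                b"hacked-kernel", b"game-v1"]))
```

The fresh nonce is what makes recorded quotes useless, and the hash chain is why a single swapped component changes every subsequent measurement.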
That said, you might be right that breaking the proprietary software when it runs on custom builds of the FOSS software would be compliant with the license. That is what TiVo did. Would be pretty annoying though, since you couldn't immediately reboot into a new distro kernel security update, since it wouldn't be known by the remote attestation stuff yet.
https://sfconservancy.org/blog/2021/mar/25/install-gplv2/ https://sfconservancy.org/blog/2021/jul/23/tivoization-and-t... https://events19.linuxfoundation.org/wp-content/uploads/2017...
1. No games
2. Inscrutable rootkit
3. Piracy
4. Attestation, i.e. partial Tivoization
Of these, #4 seems least awful and maximally user freedom preserving. Unlike regular Tivoization, also, we're not talking about locking down the whole machine. No need. You basically just need to attest the kernel and some binary signing infrastructure. You can run custom builds of whatever else you want otherwise.
I mean, or you can run a trusted VM, as some others have suggested. Is that really any worse?
Long term damages are self explanatory; it's called a rootkit.
However, there are ways to detect when someone is being an absolute madman with the hacks. We're talking head snapping through walls with 100% accuracy and instantaneous displacement across an entire 30 minute match. These people can simply be banned immediately by hardware/steam ID. We can write basic rules to detect stuff like this. There's no "confidence interval" for speed hacking through a map and awping the entire CT team in 3 seconds. You certainly don't need kernel drivers.
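A basic rule of that kind can be sketched server-side in a few lines. The thresholds here are illustrative, not from any real game:

```python
# Sketch: server-side sanity rule flagging a player whose reported
# displacement between ticks exceeds what the max movement speed allows,
# with some slack for latency jitter. All constants are made up.
import math

MAX_SPEED = 7.5  # assumed game constant, units per second
SLACK = 1.5      # tolerance multiplier for jitter / lag compensation

def is_teleport(prev_pos, pos, dt):
    """True if moving prev_pos -> pos in dt seconds is impossible."""
    dist = math.dist(prev_pos, pos)
    return dist > MAX_SPEED * dt * SLACK

# Normal strafing over one 30 Hz tick: fine.
assert not is_teleport((0, 0, 0), (0.2, 0, 0), 1 / 30)
# Crossing half the map in one tick: ban.
assert is_teleport((0, 0, 0), (50, 0, 0), 1 / 30)
```

The real difficulty is everything below this threshold: subtle aimbots and wallhacks look statistical rather than physically impossible, which is where the "confidence interval" arguments come in.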
If you can cheat and get away with it, then you'll see streamers do it. That will tank confidence in your game.
It doesn't matter if cheating doesn't make you top the leaderboard. If you have global leaderboards, they will be dominated by cheaters.
I don't think rootkits are excusable but if the solution was simple they would do that.
https://safety.twitch.tv/articles/en_US/Knowledge/Community-...
And how do you actually ensure a good hardware ID that can't be trivially modified?
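A naive answer - hash several device serials together - illustrates the problem: the ID is only as trustworthy as the serials beneath it, and those can be spoofed by flashed firmware or DMA devices reporting fake IDs. A sketch (all serial strings here are made up):

```python
# Sketch: naive hardware ID from hashed device serials. Anything that
# can spoof a serial (reflashed firmware, DMA card) defeats it.
import hashlib

def hardware_id(serials):
    """Stable ID from a set of serial strings, order-independent."""
    h = hashlib.sha256()
    for s in sorted(serials):
        h.update(s.encode() + b"\x00")  # delimiter avoids ambiguity
    return h.hexdigest()

real = hardware_id(["SSD-S/N-123", "MB-S/N-456", "CPU-S/N-789"])
# Swap one spoofable serial and the banned machine looks brand new:
spoofed = hardware_id(["SSD-S/N-123", "MB-S/N-456", "CPU-S/N-000"])
```

This is why some games reach for the TPM's endorsement key instead: it is burned in at manufacture and backed by a certificate chain, so changing it means changing silicon.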
kernel.modules_disabled = 1
kernel.kexec_load_disabled = 1
The options can be loaded last, after the OS is entirely up and running, using sysctl. The script that loads these options would have to be disabled and the OS rebooted prior to doing OS updates. Once these options are enabled, they cannot be disabled without a reboot.

If giving a video game sudo, doas, or root access, research the game, its developers, and its publisher exhaustively, and ask a magic 8-ball at least 3 times if the game developers can be trusted. Are they within your country's jurisdiction? As others alluded to, consider having a dedicated bare-metal system for the games that are suspect. Keep a thumb drive around with the OS image, maybe even a few OS snapshots, just in case the game performs dark magic on your system.

Consider enabling auditd with custom rules to watch for writes within /boot, /etc, /lib and /usr at the very least. auditd has a built-in module that can be enabled to send audit messages to a remote syslog server. If a game is doing something sneaky or shady, name and shame them.
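The auditd rules mentioned above might look like this (a sketch; the key name is arbitrary, and you would tune the paths and permissions to taste):

```
# /etc/audit/rules.d/game-watch.rules
# Watch for writes (w) and attribute changes (a) under system
# directories while an untrusted game is installed.
-w /boot -p wa -k game_watch
-w /etc  -p wa -k game_watch
-w /lib  -p wa -k game_watch
-w /usr  -p wa -k game_watch
```

After loading the rules (e.g. `augenrules --load`), hits can be pulled back out with `ausearch -k game_watch`.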
-- Your friendly neighborhood rootkit developer
But if "serious gamers" really want to go this far to prevent cheating (which will happen anyways as it's not a technical but social problem) then go ahead I guess.
If you want to run it I don't see a problem. Use a dedicated machine. Lets call it a console. Use it exclusively to play online pvp. Don't use it for anything else.
Privacy and security conscious people who use Linux desktops as general purpose computing devices generally don't want anti-cheat systems on their computers. I have no problem with the technology existing for other people. Don't try and force me to use it or I won't support your games/service.
I think a lot of the posturing from game publishers about anti-cheat on linux is really about dissatisfaction with Valve's control of the platform and revenue cut. Competitors aren't prepared to invest in development to build a strong platform like Valve but they are jealous of Valve's income. Nerfing their product on Linux is likely a way of pushing people to other platforms. I don't know what they are smoking because Sony, Apple, Nintendo and Microsoft aren't going to be any better for them.
This has always interested me when it comes to the need for anti-cheat to exist... For instance with wallhacking, the way most FPS style game engines have always been written means the server sends all player location information to all clients, so it's all there in memory if you can get to it.
But what if your server engine instead only sent relevant player location data to each client, it would be more work, the server would have to do occlusion tests for each player pair, but a bounding box and some reasonable spatial partitioning should make that reasonably efficient. To prevent occlusion lag, e.g players not smoothly appearing around corners, the server can make reasonable predictions with some error margins based on a combination of current player velocities and latencies.
I know this is just one part of cheating, but it seems like all the other ones (manipulating input) are a losing battle anyway. I mean, ultimately you can't stop people hooking up the input and display to a completely independent device with computer vision.
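The occlusion scheme sketched above can be shown in a toy 2D form: only reveal an opponent if some corner of their latency-expanded bounding box has line of sight to the viewer. Walls here are axis-aligned grid cells and line of sight is checked by sampling the segment; a real engine would use precomputed visibility sets or BSP/portal queries instead of per-pair raycasts.

```python
# Toy 2D server-side occlusion filter. All geometry and numbers are
# illustrative; sampling LOS against grid cells stands in for proper
# raycasts or precomputed visibility.

WALLS = {(2, y) for y in range(0, 5)}  # a vertical wall at x == 2

def blocked(a, b, steps=64):
    """Sample the segment a -> b against the wall cells."""
    (ax, ay), (bx, by) = a, b
    for i in range(steps + 1):
        t = i / steps
        cell = (int(ax + (bx - ax) * t), int(ay + (by - ay) * t))
        if cell in WALLS:
            return True
    return False

def should_send(viewer, target, target_vel, latency):
    """Reveal target if any corner of its reachable box is visible.

    The box is expanded by how far the target could move while the
    update is in flight, so players never pop in around corners late.
    """
    reach = max(abs(target_vel[0]), abs(target_vel[1])) * latency
    tx, ty = target
    corners = [(tx - reach, ty - reach), (tx + reach, ty - reach),
               (tx - reach, ty + reach), (tx + reach, ty + reach)]
    return any(not blocked(viewer, c) for c in corners)

# Standing still behind the wall: position withheld.
assert not should_send((0.5, 2.5), (4.5, 2.5), (0.0, 0.0), 0.1)
# Moving fast enough to round the wall within one RTT: revealed early.
assert should_send((0.5, 2.5), (4.5, 2.5), (0.0, 30.0), 0.1)
```

The trade-off from the thread is visible here: the faster the players and the laggier the connection, the bigger the expanded box, and the more the scheme degenerates into sending everything anyway.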
Shadows and reflections are the hard part. Especially when light can be cast by your own/other players' weapon. It gets even more complicated with ray tracing becoming common.
If you look at the bottom gif in this post you can see what a huge advantage a wall-hack still is: https://technology.riotgames.com/news/demolishing-wallhacks-...
I stopped playing any game that doesn't give me this control when I switched to Linux[0].
If the price of preventing cheating is losing control over my system, it's not worth it. There are plenty of games out there that respect their players. No need to support ones that want to be a gatekeeper between your gaming experience and your computer.
[0]: https://www.scottrlarson.com/publications/publication-transi...
I can't agree more with the video linked in this article, the one the author dismisses as FUD and misinformation. The author is just flat out wrong to discount the threat.
Any hacker would want a rootkit, and any nation state would want this too. Tencent has a convenient nation state behind them, and that state has anything but a clean record on human rights.
Importantly, you don't need to control and lock down the edge to have an effective anti-cheat. You can do server side checking that is just as effective.
Secondly, unless there are monetary rewards for winning or something, is it really that big of a problem if people cheat? If you are playing against a stranger, is there a significant difference between playing against someone of the same skill level with cheats, and someone that is much better than you? With a good bracketing system, people who cheat will either end up against other people who cheat, or people who are good enough to play against cheaters. You could also let players opt in to using anti-cheat software, and then only play against other players with anti-cheat software. Or even have arenas specifically for people who use cheating software, for people who think that makes the game more fun.
In between clan matches I would practice by myself on public servers. In my eyes, there were only players that were better than me and players that were worse. They could have been better through either skill or through cheating - hax. Either way, I didn't gain anything from playing against them, so I would just try another server.
Eventually you’d find one with a couple of guys from another clan hanging out (“[VD8 Bros] jojo” and “[VD8 Bros] dEvin” etc.), join them, and eventually more people would arrive. Someone really good would inevitably show up with very little social skills (it was mostly text chat) and we’d eventually combine these two priors into deciding they were cheating, and kick them off.
If someone really good showed up and had good, friendly, amusing chat then we’d try and play with them again somehow!
When it comes to the problem of cheating in games, I think the only solution is to bind the gamer's identity to a real life identity that is not trivial to change. That way the cheater only needs to be caught once.
I don't know how many vulnerable drivers the average gamer has installed. I'm sure 'at least some' is a safe assumption. The issue I have with this is that although it may be expected, I don't find it acceptable.
The article presents having this exploitable software on your computer as benign. I don't think that's a particularly healthy attitude, especially in an article oriented towards a more general audience.
The author hasn't had a problem with the anti-cheat software that they like. This is not an argument for why this is a good solution, or why kernel-level anti-cheat is not a security risk. Further, normalising software vulnerabilities weakens whatever case is being made. The more acceptable it is to have broken, exploitable software installed, the more acceptable it will be to ship anti-cheat software that is broken and exploitable.
By the way, on trust: having trust in the vendor is ... inadvisable. I'm not saying it's guaranteed to backfire, but it can only backfire in one direction. The situation in which you trust an entity with goals that are (at best) unaligned with your own is better described as one where they have leverage over you.
It strikes me that, maybe the problem is "anti-cheat" is a misnomer. It's really more like "cheat-detection".
bob1029 says it's sufficient to "make it feel like no one is cheating" and to do that you "don't need kernel drivers".
Well, I'm glad for bob1029 that they are so blind that they don't "feel" like the game is being hacked in games that don't have anti-cheat (or some hypothetical anti-cheat that doesn't use a rootkit, I wish bob1029 would elaborate because I haven't seen one yet!). But many of us have higher perception levels and wouldn't be able to ignore the cheaters.
since EAAC was mentioned, it is comical how the Battlefield 6 open beta was already swarmed with cheaters despite using a "new" version of the anticheat[1]. it gets very annoying when a kernel anticheat is just slapped on as a marketing point about how 'secure' the environment is.
so much can be done server side instead, but devs try their best to minimize costs at their end - e.g. GTA Online using P2P connections, and only adding partial protections against leaking the IP addresses of other players in the session a couple of years back.
personally, like with the tour de france, i'd rather accept the reality that there will be cheaters regardless than face issues like this. there are other annoyances like blocking virtualization, etc., which still make it hard to main linux and run windows for select things.
[1] https://kotaku.com/battlefield-6-open-beta-cheating-cheaters...