Although this is true for most games, it's worth noting that it isn't universally true. Usermode anti-cheat does sometimes work unmodified in Wine, and some anti-cheat software has Proton support, though not all developers elect to enable it.
Looking at you, Rust.
Edit:
And the rest of you. If even Microsoft's Master Chief Collection supports it, I don't understand why everyone else does not.
Then I saw the arewe…yet URL and thought you meant Rust the programming language
Then I visited the arewe…yet link and realized it was the Rust game you meant after all
It's because the Linux versions of those anti-cheats are significantly weaker than their Windows counterparts.
Trickier for the sibling comment about Rust, where either meaning could be valid.
You can be clever and build a random memory allocator. You can get clever and watch for frozen struct members after a known set operation. What you can't do is prevent all cheating. There's the device layer, driver layer, MITM, emulation, and now even AI mouse control.
The only thing you can do is watch for it and swing the ban hammer. Valve has a wonderful write-up about client-side prediction recording used to verify that killcam shots were indeed kill shots and not aimbots (and this method is great for seeing those in action as well!)
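The "frozen struct member" idea above can be sketched as a toy integrity check (all names here are hypothetical, and a real implementation would probe raw process memory rather than Python objects): perform a known set operation with a fresh canary value, then verify the write actually took effect. A cheat that pins a value in memory will silently undo the write and give itself away.

```python
import random

class GameState:
    """Toy stand-in for game memory; a freezing cheat pins a field's value."""
    def __init__(self):
        self.memory = {"health": 100}
        self.frozen = {}  # fields a cheat has pinned to a fixed value

    def cheat_freeze(self, field):
        self.frozen[field] = self.memory[field]

    def write(self, field, value):
        self.memory[field] = value
        # A memory-freezing cheat immediately restores the pinned value.
        if field in self.frozen:
            self.memory[field] = self.frozen[field]

def detect_frozen_field(state, field):
    """Do a known set operation, then check whether the write stuck."""
    old = state.memory[field]
    canary = old + random.randint(1, 1000)  # guaranteed to differ from old
    state.write(field, canary)
    tampered = state.memory[field] != canary
    state.write(field, old)  # restore (silently fails if the field is frozen)
    return tampered

legit, hacked = GameState(), GameState()
hacked.cheat_freeze("health")
print(detect_frozen_field(legit, "health"))   # False: the write took effect
print(detect_frozen_field(hacked, "health"))  # True: the value was pinned
```

The same pattern generalizes: any observable whose value the game can legitimately change becomes a tripwire for cheats that hold it constant.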
Sure, but you still have to make a serious attempt or the experience will be terrible for any non-cheaters. Or you just make your game bad enough that no one cares. That's an option too.
Yes they do. They don't stop all cheating, but they raise the barrier to entry which means fewer cheaters.
I don't like arguments that sound like "well you can't stop all crime so you may as well not even try"
If you don't need real-time packets and can deal with the old-school architecture of pulses, there are things you can do on the network to ensure security.
You do this too on real-time UDP; it's just a bit trickier. Prediction and pattern-discovery analysis are really the only options thus far.
But I could be blowing smoke and know nothing about the layers of kernel integration this malware has developed.
Because of that, usermode anti-cheat is definitely far from useless in Wine; it can still function insofar as it tries to monitor the process space of the game itself. It can't really do much to ensure the integrity of Wine directly, but usermode anti-cheat running on Windows can't do much to ensure the integrity of Windows directly either, without going the route of requiring attestation. In fact, for the last anti-cheat software I attempted to mess with, which to be fair was circa 2016, it was still possible to work around anti-cheat mechanisms by detouring the Windows API calls themselves, to the extent that you can. (If you're somewhat clever it can be pretty useful, and it has the bonus of obviously being much harder to detect.)
The limitation is obviously that inside Wine you can't see most Linux resources directly using the same APIs, so you can't go and try to find cheat software directly. But let's be honest, that approach isn't really terribly relevant anymore since it is a horribly fragile and limited way to detect cheats.
For more invasive anti-cheat software, well, we'll see. But just because Windows is closed source hasn't stopped people from patching Windows itself or writing their own kernel drivers. If that really were a significant barrier, Secure Boot and TPM-based attestation wouldn't be on the radar for anti-cheat vendors. Valve, however, doesn't seem keen to support this approach at all on its hardware, and if that forces anti-cheat vendors to go another way, it's probably all the better. I think the Secure Boot approach has a limited shelf life anyway.
To be honest, I don't hate the lack of cheating compared to older Battlefield games.
I'm curious, does anyone know how exactly they check for this? How was it actually made unspoofable?
Any player consistently responding to in-game events (an enemy appearing) with sub-80ms reaction times should be an automatic ban.
Is it ever? No.
Given good enough data, a good team of data scientists would be able to make a great set of rules using statistical analysis that effectively bans anyone playing at a level beyond human.
In the chess-of-FPS that is CS, even a pro will make the wrong read based on their team's limited info about the game state. A random wallhacker making perfect reads with limited info over several matches IS flaggable... if you can capture and process the data and compare it to (mostly) legitimate player data.
Kernel level? The SOTA cheats use custom hardware that uses DMA to spy on the game state. There are now also purely external cheating devices that use video capture and mouse emulation to fully simulate a human.
Can you define what "reacting" means exactly in a shooter, such that you can spot it in game data reliably enough to apply automatic bans?
AFAIK there have been wallhacks and aimbots since the open beta.
I think the biggest thing is that the anti-cheat devs are using Microsoft's CA to check whether your EFI executable was signed by Microsoft. If that's the case, then it's all good and you are allowed to play the game you paid money for.
I haven't tested self-signed Secure Boot with Battlefield 6. I know some games literally do not care whether you signed your own stuff, only whether Secure Boot is actually enabled.
edit: Someone else confirmed they require TPM to be enabled too, meaning yeah, they are using remote attestation to verify the validity of the signed binary.
There are two additional concepts built upon the TPM and Secure Boot that matter here, known as Trusted Boot [1,2] and Remote Attestation [2].
Importantly, every TPM has an Endorsement Key (EK) built into it, which is really an asymmetric keypair, and the private key cannot be extracted through any normal means. The EK is accompanied by a certificate, which is signed by the hardware manufacturer and identifies the TPM model. The major manufacturers publish their certificate authorities [3].
So you can get the TPM to digitally sign a difficult-to-forge, time-stamped statement using its EK. Providing this statement along with the TPM's EK certificate on demand attests to a remote party that the system currently has a valid TPM and that the boot process wasn't tampered with.
Common spoofing techniques get defeated in various ways:
- Stale attestations will fail a simple timestamp check
- Forged attestations will have invalid signatures
- A fake TPM will not have a valid EK certificate, or its EK certificate will be self-signed, or its EK certificate will not have a widely recognized issuer
- Trusted Boot will generally expose the presence of obvious defeat mechanisms like virtualization and unsigned drivers
- DMA attacks can be thwarted by an IOMMU, the existence/lack of which can be exposed through Trusted Boot data as well
- If someone manages to extract an EK but shares it online, it will be obvious when it gets reused by multiple users
- If someone finds a vulnerability in a TPM model and shares it online, the model can be blacklisted
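The checks in the list above can be sketched as a toy verifier. To be clear about the simplifications: real TPM quotes are signed with an asymmetric EK/AK and verified against the vendor CA chain, whereas this stdlib-only sketch substitutes an HMAC over a JSON body, and the issuer names and fields are all made up. The structure of the checks (issuer, freshness, nonce, signature) is the part being illustrated.

```python
import hashlib
import hmac
import json
import time

TRUSTED_ISSUERS = {"ExampleTPMVendor CA"}  # stand-in for real vendor CAs
MAX_AGE_S = 30                             # freshness window for a quote

def make_quote(ek_secret, nonce, issuer, ts=None):
    """Toy TPM 'quote'. Real TPMs sign with an asymmetric EK/AK; an HMAC
    with a per-TPM secret stands in so the sketch stays stdlib-only."""
    body = json.dumps({"nonce": nonce, "ts": ts or time.time(), "issuer": issuer})
    sig = hmac.new(ek_secret, body.encode(), hashlib.sha256).hexdigest()
    return body, sig

def verify_quote(ek_secret, body, sig, expected_nonce):
    claims = json.loads(body)
    if claims["issuer"] not in TRUSTED_ISSUERS:
        return "untrusted EK certificate issuer"
    if claims["nonce"] != expected_nonce:
        return "stale or replayed attestation"
    if time.time() - claims["ts"] > MAX_AGE_S:
        return "attestation too old"
    expect = hmac.new(ek_secret, body.encode(), hashlib.sha256).hexdigest()
    return "ok" if hmac.compare_digest(sig, expect) else "forged signature"

ek = b"per-TPM endorsement secret"
body, sig = make_quote(ek, "abc123", "ExampleTPMVendor CA")
print(verify_quote(ek, body, sig, "abc123"))       # ok
print(verify_quote(ek, body, sig, "other-nonce"))  # stale or replayed attestation
body2, sig2 = make_quote(ek, "abc123", "Shady Self-Signed CA")
print(verify_quote(ek, body2, sig2, "abc123"))     # untrusted EK certificate issuer
```

Using a server-supplied nonce as `expected_nonce` is what defeats replay: a captured quote can't be reused because the next challenge demands a fresh value.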
Even so, I can still think of an avenue of attack, which is to proxy RA requests to a different, uncompromised system's TPM. The tricky parts are figuring out how to intercept these requests on the compromised system, how to obtain them from the uncompromised system without running any suspicious software, and knowing what other details to spoof that might be obtained through other means but which would contradict the TPM's statement.
[1]: https://learn.microsoft.com/en-us/windows/security/operating...
[2]: https://docs.system-transparency.org/st-1.3.0/docs/selected-...
[3]: https://en.wikipedia.org/wiki/Trusted_Platform_Module#Endors...
FACEIT is significantly more effective.
Anti-cheats are as much a marketing ploy as they are actual anti-cheats. People believe everyone is cheating, so it must be true. People believe nobody bypasses the FACEIT anti-cheat, so it must be true. Neither of those is correct.
Riot revels in this by marketing their anti-cheat, but there are always going to be cheaters. And sooner or later we will have vulnerabilities in their kernel spyware. I'd much rather face a few cheaters here and there (which is not as common as people make it out to be on high trust factor).
You think tournament organizers or pro players know the first thing about anti cheats? They buy the marketing just like everybody else.
Or perhaps the 0ms-80ms distribution of mouse movement matches the >80ms mouse movement distribution within some bounds. I'm thinking KL divergence between the two.
The Kolmogorov-Smirnov Test for two-dimensional data?
There's a lot of interesting possible approaches that can be tuned for arbitrary sensitivity and specificity.
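The KS-test idea from the sibling comment can be sketched in a few lines. This is a plain two-sample Kolmogorov-Smirnov statistic implemented from scratch (no SciPy), and the reaction-time samples are invented purely for illustration:

```python
import bisect

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap between
    the two samples' empirical CDFs (0 = identical, 1 = fully separated)."""
    a, b = sorted(a), sorted(b)
    def ecdf(xs, t):  # fraction of xs that are <= t (xs must be sorted)
        return bisect.bisect_right(xs, t) / len(xs)
    return max(abs(ecdf(a, t) - ecdf(b, t)) for t in sorted(set(a) | set(b)))

# Made-up reaction times (ms): a human-like spread vs a bot's tight cluster.
human = [180, 210, 250, 190, 320, 275, 230, 205, 260, 295]
bot = [72, 75, 74, 73, 76, 71, 74, 75, 73, 72]
print(ks_statistic(human, list(human)))  # 0.0: identical distributions
print(ks_statistic(human, bot))          # 1.0: no overlap at all
```

In practice the statistic would feed a significance threshold tuned against known-legitimate telemetry, not a hard cutoff like the toy values here.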
And the SOTA anti-cheats now use IOMMU shenanigans to keep DMA devices from seeing the game state. The arms race continues.
I feel like this is the same as saying "seatbelts don't prevent car accident deaths at all", just because people still die in car accidents while wearing seat belts.
Just because something isn't 100% effective doesn't mean it doesn't provide value. There is a LOT less cheating in games with good anti-cheat, and it is much more pleasant to play those games because of it. There is a benefit to making it harder to cheat, even if it doesn't make it impossible.
It might just be the game too - I do think the auto-aim is a bit strong, because I feel like I make aimbot-like shots from time to time. And depending on the mode, BF6 _wall hacks for you_ if there are players in an area outside of where they are supposed to be defending. I was pretty surprised to see a little red floating person overlay behind a wall.
I’ve seen so many players saying “look you can own my entire pc just please eliminate the cheating.”
It would be great to see more of a web of trust thing instead of invasive anti cheat. That would make it harder for people to get into the games in the first place though so I don’t know if developers would really want to go that way.
The vast majority of cheaters in most games are not sophisticated users. Ease of access and use is the biggest issue.
It's really much more nuanced than that. Counter-Strike 2 has already implemented this type of feature, and it immediately produced some clear false positives. There are many situations where high-level players play in a predictive, rather than reactive, manner. Pre-firing is a common strategy that will always look indistinguishable from an inhuman reaction time. So is tap-firing at an angle that you anticipate an opponent may peek you from.
The qualifier "good" for "good anti-cheat" is doing a lot of heavy lifting. What was once good enough is now laughably inadequate. We have followed that thread to its logical conclusion with the introduction of kernel-level anti-cheat. That has proven to be insufficient, unsurprisingly, and, given enough time, the act of bypassing kernel-level anti-cheat will become commoditized just like every other anti-cheat prior.
Not only does this present a huge security risk, it can break existing software and the OS itself. These anti-cheats tend not to be written by people intimately familiar with Windows kernel development, and they cause regressions in existing software which the users then blame on Windows.
That's why Microsoft did Windows Defender and tried to kill off 3rd party anti-virus.
I would beg to differ. In the US at least, there does seem to be a hidden arms race between safety features and the environment (in the form of car size growth)
Plus, there are some really simple side-channel exploits: if a whitelisted app has vulns, you can grab a full-access handle to your anti-cheat-protected game, rendering those kernel-level protections useless. That does mean an external cheat rather than a full-blown internal cheat; internal cheats carry way more risk but also way more reward, such as fine-grained game modification, or even finding 0-days in the game's network stack (maybe a buffer overflow or double-free) that make it possible to send malicious payloads to other players and do RCEs. (It is still possible to do internal cheat injection from an external cheat, using techniques such as manual mapping/reflective DLL injection, which effectively replicates the PE loading mechanism; you then hijack some execution routine at some point to call your injected code, either by creating a new thread, hijacking an existing thread's context, APC callback hijacking, or even exception vector hijacking - in general, hijacking any kind of control flow. But anti-cheat software actively looks for that "illegal" stuff in memory, triggers a red flag, and bans you immediately.)
From what I've seen over the years, the biggest problem for anti-cheat on Linux is that there is too much liberty and freedom, and anti-cheat/antivirus is the antithesis of liberty and freedom. This is because anti-cheat wants to use strong protection mechanisms borrowed from antivirus techniques to provide a fair gaming experience, at the cost of lower framerates, more processing power, and the occasional BSOD.
And I know it is very cliché at this point, but I always love to quote Benjamin Franklin: "Those who would give up essential liberty to purchase a little temporary safety, deserve neither liberty nor safety". I therefore only keep Windows to play games lately; I switched to a new laptop, installed CachyOS on it, and transferred all my development stuff over. You could basically say my main PC at home is a more "free" Xbox.
Speaking of Xbox, they have even stricter control over games: one of the anti-cheat techniques, HVCI (hypervisor-protected code integrity), also known as VBS, is straight out of Xbox tech, where Hyper-V is used to isolate the game process from the main OS, making the Xbox impossible to jailbreak. On Windows, it prevents some degree of DMA attack by leveraging the IOMMU and encrypting memory contents beforehand to make sure they are not visible to external devices over the PCIe bus.
In other words, it is ultimately all about the tradeoff between freedom and control.
A similar concept, trusted computing: https://en.wikipedia.org/wiki/Trusted_Computing
VAC is still a joke in CS2 - literally unplayable once you've reached 15k+. Riot Vanguard is extremely invasive, but it's leaps and bounds ahead of VAC.
And Valve's ban waves long after the fact don't improve the player experience at all. CS2 is F2P, alts are easy to get, cheating happens in almost every single high-ranked game, and the player experience is shit.
Anti-cheat makers don't need to eliminate cheating completely; they just need to catch enough cheating (and ban unpredictably) that average people are mostly discouraged. As long as cheat creators have to scurry around in secrecy and guard their implementations until each one is caught, the "good" cheats will never be a commodity on mainstream, well-funded games with good anti-cheat.
Cheat-creators have to do the hard hacking and put their livelihoods on the line, they make kids pay up for that.
And being real, the zero-day cheats are closely guarded and trickled out and sold for high prices as other cheats get found out, so for AAA games, the good cheats are priced out of comfort zone and anyone who attempts the lazy/cheap cheats is banned pretty quickly. A significant portion of the dishonest becomes honest through laziness or self-preservation. Only a select few are truly committed to dishonesty enough to put money and their accounts on the line.
Same way there are fewer murderers and thieves than there are non-murderers and non-thieves (at least in western countries).
As always, one of the most difficult parts is getting good features and data. In this case one difficulty is measuring and defining the reaction time to begin with.
In Counter-Strike you rely on footsteps to guess if someone is around the corner and start shooting when they come close. For far-away targets, lots of people camp at specific spots and often shoot without directly sighting someone if they anticipate someone crossing - the hit rate may be low, but it's a low-cost thing to do. Then you have people not hiding too well and showing a toe. Or someone pinpointing the position of an enemy based on information from another player. So the question is, what is the starting point for you to measure the reaction?
Now let's say you successfully measured the reaction time and applied a threshold of 80ms. Bot runners will adapt and sandbag their reaction time, or introduce motions to make it harder to measure mouse movements, and the value of your model now is less than the electricity needed to run it.
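The sandbagging point is easy to demonstrate with a tiny simulation. The distributions below are entirely invented (the ~20ms raw bot reaction, the 25ms jitter, the 180ms deliberate delay are all assumptions), but they show how a fixed cutoff collapses the moment the cheat adds a human-like delay:

```python
import random

random.seed(7)
THRESHOLD_MS = 80  # the proposed fixed reaction-time cutoff

def bot_reaction_ms(sandbag_ms):
    """A cheat's raw ~20ms reaction plus a deliberate, jittered delay."""
    return max(0.0, 20 + random.gauss(sandbag_ms, 25))

def flag_rate(samples):
    """Fraction of reactions that would trip the fixed threshold."""
    return sum(t < THRESHOLD_MS for t in samples) / len(samples)

naive = [bot_reaction_ms(0) for _ in range(1000)]        # no sandbagging
sandbagged = [bot_reaction_ms(180) for _ in range(1000)]  # ~180ms added delay

print(flag_rate(naive) > 0.9)        # True: the naive bot trips it constantly
print(flag_rate(sandbagged) < 0.05)  # True: same bot now slips under the cutoff
```

The sandbagged bot still lands shots with inhuman accuracy; only its timing signature changed, which is why timing thresholds alone age so badly.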
So much for the proposal to solve the reaction-time problem with KL divergence. Congratulations, you just solved a trivial statistics problem to create very little business value.
A human can't really, which is why you need to bring in ML. Feed it enough game states of legit players vs known cheaters, and it will be able to find patterns.
A properly designed game should not send the positions of enemies that are out of view.
This is generally the anti-cheat problem. Certain genres have gameplay that cannot be implemented without trusting the client at least some of the time.
The only non-generic word you see in the crash message is "SQLite".
You look it up, find SQLite, and you bother the developers for help.
The problem is as old as labels.
You aren't eliminating cheaters (that's impossible); you are limiting their impact.
A suitable game engine would have knowledge of when a shadow, player, grenade, noise, or other reactable event occurs for a given client.
Especially if games aren't processed in real time but processed later based on a likelihood of cheating drawn from other stats.
I've played at the pro level. Nobody pre-fires with perfect robotic consistency.
I don't care if it takes 50 matches of data for the statistical model to call it inhuman.
Valve has enough data that they could easily make the threshold for a ban something like '10x more consistent at pre-firing than any pro has ever been' with a high confidence borne over many engagements in many matches.
I'm not sure how I feel about that, but it's what I think will happen.
I predict that Hacker News in particular will dislike using facial recognition technology to allow for permanent ban hammers, but frankly this neatly solves 95% of the problem in a simple, intuitive way. The approach has the capacity to revitalize entire genres, and there's lots of cool stuff you could potentially implement when you can guarantee that one account = one person.
Having some anti-cheat is better than no anti-cheat but my point is it’s not a shield. It’s a cheese grater.
Then all you need to do to fool this anticheat is to add some randomness to the cheat.
My world is pretty fine, as I don't play games on servers without active admins/mods who kick and ban people who obviously cheat.
ML solutions can maybe help here, but I'll believe they can reliably detect cheats, without also banning lucky or skilled players, once I see it.
This is one of the cases where ML methods seem appropriate.
Anyone that's not dumb will know (maybe after the heat of the moment) why they lost, but the vast majority of people will blame anything they can instead. Teammates, lag, the developers, etc. Cheating is merely one of these excuses.
> I’ve seen so many players saying “look you can own my entire pc just please eliminate the cheating.”
This entire idea is so dumb it makes my head hurt. You can't eliminate bad actors no matter how hard you try. It's impossible in the real world.
All these "if only we could prevent X with more surveillance/control" ideas go up in flames as soon as reality hits. Even if a single person bypasses it, we can question everything. Then all we're left with are these surveillance systems that are then converted into pure data exfiltration to sell it all to the highest bidder (assuming they weren't doing this already).
I applaud Valve for not going down the easy route of creating spyware and selling it as "protection".
You've made them the same as the best players. Otherwise we're banning the best players.