Because of that, usermode anti-cheat is far from useless in Wine; it can still function insofar as it monitors the process space of the game itself. It can't do much to ensure the integrity of Wine directly, but usermode anti-cheat running on Windows can't do much to ensure the integrity of Windows directly either, short of requiring attestation. In fact, with the most recent anti-cheat software I've actually tried to mess with, which to be fair was circa 2016, it was still possible to work around the anti-cheat mechanisms by detouring the Windows API calls themselves. (If you're somewhat clever it can be pretty effective, with the obvious bonus of being much harder to detect.)
The limitation, obviously, is that inside Wine you can't see most Linux resources through the same APIs, so you can't go hunting for cheat software directly. But let's be honest, that approach isn't terribly relevant anymore; it's a horribly fragile and limited way to detect cheats.
For more invasive anti-cheat software, well, we'll see. But Windows being closed source hasn't stopped people from patching Windows itself or writing their own kernel drivers. If that really were a significant barrier, Secure Boot and TPM-based attestation wouldn't be on the radar for anti-cheat vendors. Valve, however, doesn't seem keen to support this approach at all on its hardware, and if that forces anti-cheat vendors to go another way, all the better. I think the Secure Boot approach has a limited shelf life anyway.
To be honest, I don't hate the lack of cheating compared to older Battlefield games.
I'm curious, does anyone know how exactly they check for this? How was it actually made unspoofable?
Any player responding to ingame events (enemy appeared) with sub 80ms reaction times consistently should be an automatic ban.
Is it ever? No.
Given good enough data, a good team of data scientists would be able to build a set of rules using statistical analysis that effectively bans anyone playing at a level beyond human.
In the chess of FPS that is CS, even a pro will make the wrong read based on their team's limited info about the game state. A random wallhacker making perfect reads with limited info over several matches IS flaggable... if you can capture and process the data and compare it to (mostly) legitimate player data.
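To make that concrete, here's a minimal sketch of the kind of rule I mean, assuming each engagement can already be labeled upstream as a "correct read with no direct information" and that you have a baseline success rate from known-legitimate players. The 0.30 baseline, the sample-size cutoff, and the threshold below are all made-up numbers, not anything a real anti-cheat uses:

```python
# Sketch: flag players whose "blind" read success rate is implausibly high.
# Engagement labeling is assumed to happen elsewhere; numbers are illustrative.
from scipy.stats import binomtest

BASELINE_SUCCESS_RATE = 0.30   # assumed rate for legitimate high-level players
ALPHA = 1e-9                   # extremely conservative to limit false positives

def flag_player(successes: int, attempts: int) -> bool:
    """True if the player's success rate on information-limited engagements
    is statistically inconsistent with the legitimate baseline."""
    if attempts < 200:          # require a lot of data before judging anyone
        return False
    result = binomtest(successes, attempts, BASELINE_SUCCESS_RATE,
                       alternative="greater")
    return result.pvalue < ALPHA

# Example: 180 correct "blind" reads out of 250 attempts over several matches.
print(flag_player(180, 250))
```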
Can you define what "reacting" means exactly in a shooter, such that you can spot it in game data reliably enough to apply automatic bans?
Afaik there have been wallhacks and aimbots since the open beta.
I think the biggest thing is that the anti-cheat devs are using Microsoft's CA to check whether your EFI executable was signed by Microsoft. If it was, then it's all good and you are allowed to play the game you paid money for.
I haven't tested a self-signed Secure Boot setup with Battlefield 6; I know some games literally don't care if you signed your own stuff, only whether Secure Boot is actually enabled.
edit: Someone else confirmed they require a TPM to be enabled too, meaning yeah, they are using remote attestation to verify the validity of the signed binary.
There are two additional concepts built upon the TPM and Secure Boot that matter here, known as Trusted Boot [1,2] and Remote Attestation [2].
Importantly, every TPM has an Endorsement Key (EK) built into it, which is really an asymmetric keypair, and the private key cannot be extracted through any normal means. The EK is accompanied by a certificate, which is signed by the hardware manufacturer and identifies the TPM model. The major manufacturers publish their certificate authorities [3].
So you can get the TPM to digitally sign a difficult-to-forge, time-stamped statement using its EK. Providing this statement along with the TPM's EK certificate on demand attests to a remote party that the system currently has a valid TPM and that the boot process wasn't tampered with.
Common spoofing techniques get defeated in various ways:
- Stale attestations will fail a simple timestamp check
- Forged attestations will have invalid signatures
- A fake TPM will not have a valid EK certificate, or its EK certificate will be self-signed, or its EK certificate will not have a widely recognized issuer
- Trusted Boot will generally expose the presence of obvious defeat mechanisms like virtualization and unsigned drivers
- DMA attacks can be thwarted by an IOMMU, the existence/lack of which can be exposed through Trusted Boot data as well
- If someone manages to extract an EK but shares it online, it will be obvious when it gets reused by multiple users
- If someone finds a vulnerability in a TPM model and shares it online, the model can be blacklisted
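To illustrate, here's a rough sketch of what the verifying server's side of those checks could look like, using Python's `cryptography` package. It simplifies heavily: a real TPM2 flow uses a quote structure and typically an attestation key certified via the EK rather than the EK signing directly, it assumes RSA keys throughout, and the freshness window is an arbitrary assumption:

```python
# Sketch of server-side attestation checks, heavily simplified from real TPM2 flows.
import time
from cryptography import x509
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding

FRESHNESS_WINDOW = 60  # seconds; assumed acceptable age for an attestation

def attestation_ok(ek_cert_pem: bytes, manufacturer_ca_pem: bytes,
                   statement: bytes, signature: bytes,
                   statement_timestamp: float, statement_nonce: bytes,
                   expected_nonce: bytes) -> bool:
    ek_cert = x509.load_pem_x509_certificate(ek_cert_pem)
    ca_cert = x509.load_pem_x509_certificate(manufacturer_ca_pem)
    try:
        # 1. The EK certificate must chain to a recognized manufacturer CA
        #    (fake or self-signed EK certs fail here). Assumes an RSA CA key.
        ca_cert.public_key().verify(
            ek_cert.signature,
            ek_cert.tbs_certificate_bytes,
            padding.PKCS1v15(),
            ek_cert.signature_hash_algorithm,
        )
        # 2. The signed statement must verify under the EK's public key
        #    (forged attestations fail here). Assumes an RSA EK.
        ek_cert.public_key().verify(
            signature, statement, padding.PKCS1v15(), hashes.SHA256()
        )
    except InvalidSignature:
        return False
    # 3. The statement must be fresh and bound to our challenge
    #    (stale or replayed attestations fail here).
    if abs(time.time() - statement_timestamp) > FRESHNESS_WINDOW:
        return False
    return statement_nonce == expected_nonce
```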
Even so, I can still think of an avenue of attack, which is to proxy RA requests to a different, uncompromised system's TPM. The tricky parts are figuring out how to intercept these requests on the compromised system, how to obtain them from the uncompromised system without running any suspicious software, and knowing what other details to spoof that might be obtained through other means but which would contradict the TPM's statement.
[1]: https://learn.microsoft.com/en-us/windows/security/operating...
[2]: https://docs.system-transparency.org/st-1.3.0/docs/selected-...
[3]: https://en.wikipedia.org/wiki/Trusted_Platform_Module#Endors...
Or perhaps check whether the 0-80ms distribution of mouse movement matches the >80ms mouse movement distribution within some bounds. I'm thinking KL divergence between the two.
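For what it's worth, a minimal sketch of that comparison (the binning, smoothing, and inputs are all assumptions on my part):

```python
# Sketch: KL divergence between "fast" (<80ms) and "slow" (>=80ms)
# mouse-movement samples, binned into histograms over a shared range.
import numpy as np
from scipy.stats import entropy

def movement_divergence(fast_samples, slow_samples, bins=30):
    lo = min(np.min(fast_samples), np.min(slow_samples))
    hi = max(np.max(fast_samples), np.max(slow_samples))
    p, _ = np.histogram(fast_samples, bins=bins, range=(lo, hi))
    q, _ = np.histogram(slow_samples, bins=bins, range=(lo, hi))
    # Add-one smoothing to avoid division by zero, then normalize.
    p = (p + 1) / (p + 1).sum()
    q = (q + 1) / (q + 1).sum()
    return entropy(p, q)   # KL(p || q); larger means the distributions differ more
```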
The Kolmogorov-Smirnov Test for two-dimensional data?
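scipy only ships a one-dimensional two-sample KS test; a 2-D variant (e.g. Peacock's test) would need a custom or third-party implementation. A sketch of the 1-D case on per-engagement reaction times, with an arbitrary threshold of my own choosing:

```python
# Sketch: two-sample KS test comparing one player's reaction times
# against a pooled distribution from known-legitimate players.
from scipy.stats import ks_2samp

def looks_anomalous(player_times, legit_times, alpha=1e-6):
    stat, pvalue = ks_2samp(player_times, legit_times)
    return pvalue < alpha
```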
There's a lot of interesting possible approaches that can be tuned for arbitrary sensitivity and specificity.
It might just be the game too - I do think the auto aim is a bit strong, because I feel like I make aimbot-like shots from time to time. And depending on the mode, BF6 _wall hacks for you_ if there are players in an area outside of where they are supposed to be defending. I was pretty surprised to see a little red floating person overlay behind a wall.
It's really much more nuanced than that. Counter-Strike 2 has already implemented this type of feature, and it immediately produced some clear false positives. There are many situations where high-level players play in a predictive, rather than reactive, manner. Pre-firing is a common strategy that will always look indistinguishable from an inhuman reaction time. So is tap-firing at an angle that you anticipate an opponent may peek you from.
Not only does this present a huge security risk, it can break existing software and the OS itself. These anti-cheats tend not to be written by people intimately familiar with Windows kernel development, and they cause regressions in existing software which the users then blame on Windows.
That's why Microsoft built Windows Defender and tried to kill off third-party anti-virus.
As always, one of the most difficult parts is getting good features and data. In this case one difficulty is measuring and defining the reaction time to begin with.
In Counter-Strike you rely on footsteps to guess if someone is around the corner and start shooting when they come close. For far-away targets, lots of people camp at specific spots and often shoot without directly sighting someone if they anticipate someone crossing - the hit rate may be low, but it's a low-cost thing to do. Then you have people not hiding too well and showing a toe. Or someone pinpointing the position of an enemy based on information from another player. So the question is, what is the starting point from which you measure the reaction?
Now let's say you successfully measured the reaction time and applied a threshold of 80ms. Bot runners will adapt and sandbag their reaction time, or introduce motions that make mouse movements harder to measure, and the value of your model is now less than the electricity needed to run it.
So much for your proposal to solve the reaction time problem with KL divergence. Congratulations, you just solved a trivial statistics problem to create very little business value.
A human can't really, which is why you need to bring in ML. Feed it enough game states of legit players vs known cheaters, and it will be able to find patterns.
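As a hedged sketch of what that could look like, assuming per-engagement features have already been extracted from game states and labels come from confirmed bans (the feature names and data files here are hypothetical, and the model choice is just one option):

```python
# Sketch: train a classifier on engagement features from legit vs banned players.
# Feature extraction and labeling are assumed to happen elsewhere.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# X: rows of features like [reaction_time_ms, pre_aim_error_deg,
#     time_target_occluded_ms, flick_speed_deg_per_s, ...]
# y: 1 for confirmed cheaters, 0 for legitimate players.
X = np.load("engagement_features.npy")   # hypothetical pre-built dataset
y = np.load("engagement_labels.npy")

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Only act on extreme scores, aggregated over many engagements per player,
# to keep false positives down.
suspicion = model.predict_proba(X_test)[:, 1]
print("mean held-out suspicion:", suspicion.mean())
```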
You aren't eliminating cheaters (that's impossible); you're limiting their impact.
A suitable game engine would have knowledge of when a shadow, player, grenade, noise, or other reactable event occurs for a given client.
Especially if games aren't processed in real time but processed later, based on a likelihood of cheating drawn from other stats.
I've played at the pro level. Nobody pre-fires with perfect robotic consistency.
I don't care if it takes 50 matches of data for the statistical model to call it inhuman.
Valve has enough data that they could easily make the threshold for a ban something like '10x more consistent at pre-firing than any pro has ever been', with high confidence borne out over many engagements in many matches.
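One crude way to operationalize a threshold like that, assuming you can measure a per-engagement pre-fire timing error and have a pro baseline to compare against (both numbers below are invented for the sake of the sketch):

```python
# Sketch: flag a player whose pre-fire timing is far more consistent than
# the most consistent pro on record, accumulated over many engagements.
import numpy as np

PRO_BEST_STDDEV_MS = 45.0   # assumed: tightest timing spread ever seen from a pro
MIN_ENGAGEMENTS = 500       # accumulate across many matches before judging

def inhumanly_consistent(prefire_timing_errors_ms) -> bool:
    errors = np.asarray(prefire_timing_errors_ms)
    if errors.size < MIN_ENGAGEMENTS:
        return False
    # "10x more consistent" read as one tenth the spread of the best pro.
    return errors.std() < PRO_BEST_STDDEV_MS / 10.0
```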
Then all you need to do to fool this anti-cheat is add some randomness to the cheat.
My world is pretty fine, as I don't play games on servers without active admins/mods that kick and ban people who obviously cheat.
ML solutions can maybe help here, but I'll believe they can reliably detect cheats, without also banning lucky or skilled players, once I see it.
This is one of the cases where ML methods seem appropriate.
You've made them the same as the best players. Otherwise we're banning the best players.