You can be clever and build a random memory allocator, or watch for frozen struct members after a known set operation, but what you can’t do is prevent all cheating. There are device-layer attacks, driver-layer attacks, MITM, emulation, and now even AI mouse control.
The only thing you can do is watch for it and swing the ban hammer. Valve has a wonderful write-up about client-side prediction recording, used to verify that killcam shots were indeed kill shots and not aimbots (and the method is great for seeing those in action as well!)
Sure, but you still have to make a serious attempt or the experience will be terrible for any non-cheaters. Or you just make your game bad enough that no one cares. That's an option too.
Yes they do. They don't stop all cheating, but they raise the barrier to entry which means fewer cheaters.
I don't like arguments that sound like "well you can't stop all crime so you may as well not even try"
If you don’t need real-time packets and can live with the old-school architecture of pulses, there are things you can do on the network to ensure security.
You can do this on real-time UDP too, it’s just a bit trickier. Prediction and pattern discovery through analysis are really the only options thus far.
But I could be blowing smoke and know nothing about the layers of kernel integration these malware have developed.
Because of that, usermode anti-cheat is definitely far from useless in Wine; it can still function insofar as it tries to monitor the process space of the game itself. It can't really do a ton to ensure the integrity of Wine directly, but usermode anti-cheat running on Windows can't do much to ensure the integrity of Windows directly either, without going the route of requiring attestation. In fact, for the last anti-cheat software I attempted to mess with, which to be fair was circa 2016, it was still possible to work around anti-cheat mechanisms by detouring the Windows API calls themselves, to the extent that you can. (If you're somewhat clever it can be pretty useful, and it has the obvious bonus of being much harder to detect.)
The limitation is obviously that inside Wine you can't see most Linux resources directly using the same APIs, so you can't go and try to find cheat software directly. But let's be honest, that approach isn't really terribly relevant anymore since it is a horribly fragile and limited way to detect cheats.
For more invasive anti-cheat software, well, we'll see. But the fact that Windows is closed source hasn't stopped people from patching Windows itself or writing their own kernel drivers. If that really were a significant barrier, Secure Boot and TPM-based attestation wouldn't be on the radar for anti-cheat vendors. Valve, however, doesn't seem keen to support this approach at all on its hardware, and if that forces anti-cheat vendors to go another way, all the better. I think the Secure Boot approach has a limited shelf life anyway.
To be honest, I don't hate the lack of cheating compared to older Battlefield games.
I'm curious, does anyone know how exactly they check for this? How was it actually made unspoofable?
Any player who consistently responds to in-game events (enemy appeared) with sub-80ms reaction times should be an automatic ban.
Is it ever? No.
Given good enough data a good team of data scientists would be able to make a great set of rules using statistical analysis that effectively ban anyone playing at a level beyond human.
In the chess-of-FPS that is CS, even a pro will make the wrong read based on their team's limited info about the game state. A random wallhacker making perfect reads with limited info over several matches IS flaggable... if you can capture and process the data and compare it to (mostly) legitimate player data.
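A rough sketch of what that statistical flagging could look like, using an exact binomial tail test. Every number here (the baseline read rate, the counts, the threshold) is invented for illustration, not a tuned value:

```python
import math

def flag_improbable_reads(correct_reads, total_reads, baseline_rate, threshold=1e-6):
    """Flag a player whose rate of 'correct reads' (e.g. pre-aiming at an
    unseen enemy's actual position) is improbably high versus a baseline
    estimated from legitimate-player data.

    Computes the exact binomial tail probability P(X >= correct_reads)
    for X ~ Binomial(total_reads, baseline_rate).
    """
    p_tail = sum(
        math.comb(total_reads, k)
        * baseline_rate**k
        * (1 - baseline_rate) ** (total_reads - k)
        for k in range(correct_reads, total_reads + 1)
    )
    return p_tail < threshold, p_tail

# Hypothetical: a legit pro guesses right ~30% of the time; this player hit 58 of 60.
flagged, p = flag_improbable_reads(58, 60, 0.30)  # flagged -> True
```

The point of the tiny threshold and the large sample is exactly the "over several matches" part: one lucky round never trips it, but a sustained inhuman hit rate does.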
Kernel level? The SOTA cheats use custom hardware that uses DMA to spy on the game state. There are now also purely external cheating devices that use video capture and mouse emulation to fully simulate a human.
Can you define what "reacting" means exactly in a shooter, such that you can spot it in game data reliably enough to apply automatic bans?
Afaik there have been wallhacks and aimbots since the open beta.
I think the biggest thing is that the anti-cheat devs are using Microsoft's CA to check whether your EFI executable was signed by Microsoft. If that's the case then it's all good and you are allowed to play the game you paid money for.
I haven't tested self-signed Secure Boot with Battlefield 6; I know some games literally do not care if you signed your own stuff, only whether Secure Boot is actually enabled.
edit: Someone else confirmed they require TPM to be enabled too meaning yeah, they are using remote attestation to verify the validity of the signed binary
There are two additional concepts built upon the TPM and Secure Boot that matter here, known as Trusted Boot [1,2] and Remote Attestation [2].
Importantly, every TPM has an Endorsement Key (EK) built into it, which is really an asymmetric keypair, and the private key cannot be extracted through any normal means. The EK is accompanied by a certificate, which is signed by the hardware manufacturer and identifies the TPM model. The major manufacturers publish their certificate authorities [3].
So you can get the TPM to digitally sign a difficult-to-forge, time-stamped statement using its EK. Providing this statement along with the TPM's EK certificate on demand attests to a remote party that the system currently has a valid TPM and that the boot process wasn't tampered with.
Common spoofing techniques get defeated in various ways:
- Stale attestations will fail a simple timestamp check
- Forged attestations will have invalid signatures
- A fake TPM will not have a valid EK certificate, or its EK certificate will be self-signed, or its EK certificate will not have a widely recognized issuer
- Trusted Boot will generally expose the presence of obvious defeat mechanisms like virtualization and unsigned drivers
- DMA attacks can be thwarted by an IOMMU, the existence/lack of which can be exposed through Trusted Boot data as well
- If someone manages to extract an EK but shares it online, it will be obvious when it gets reused by multiple users
- If someone finds a vulnerability in a TPM model and shares it online, the model can be blacklisted
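As a toy illustration of the freshness, nonce, and signature checks in the list above. Important caveat: a real TPM quote is signed with the asymmetric EK/AK and verified against the manufacturer's certificate chain; here an HMAC with a shared key stands in for that signature so the sketch stays self-contained, and every name and number is made up:

```python
import hmac, hashlib, json, os, time

# Stand-in for the TPM's signing key (a real one is an asymmetric keypair
# whose private half never leaves the chip).
TPM_KEY = os.urandom(32)

def make_quote(nonce, pcr_digest, key=TPM_KEY):
    """What the 'TPM' side returns: a statement binding the server's fresh
    nonce to the measured boot state (PCR digest), plus a signature."""
    statement = json.dumps(
        {"nonce": nonce.hex(), "pcrs": pcr_digest.hex(), "ts": int(time.time())},
        sort_keys=True,
    ).encode()
    return statement, hmac.new(key, statement, hashlib.sha256).digest()

def verify_quote(statement, signature, expected_nonce, max_age_s=30, key=TPM_KEY):
    """Server-side checks mirroring the bullet list above:
    bad signature -> forged attestation;
    wrong nonce   -> replay from another session;
    old timestamp -> stale attestation."""
    expected_sig = hmac.new(key, statement, hashlib.sha256).digest()
    if not hmac.compare_digest(expected_sig, signature):
        return False
    claims = json.loads(statement)
    if claims["nonce"] != expected_nonce.hex():
        return False
    if time.time() - claims["ts"] > max_age_s:
        return False
    return True

# Usage: the server sends a fresh nonce, the client returns the quote.
nonce = os.urandom(16)
stmt, sig = make_quote(nonce, hashlib.sha256(b"measured boot log").digest())
```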
Even so, I can still think of an avenue of attack, which is to proxy RA requests to a different, uncompromised system's TPM. The tricky parts are figuring out how to intercept these requests on the compromised system, how to obtain them from the uncompromised system without running any suspicious software, and knowing what other details to spoof that might be obtained through other means but which would contradict the TPM's statement.
[1]: https://learn.microsoft.com/en-us/windows/security/operating...
[2]: https://docs.system-transparency.org/st-1.3.0/docs/selected-...
[3]: https://en.wikipedia.org/wiki/Trusted_Platform_Module#Endors...
Or perhaps the 0ms-80ms distribution of mouse movement matches the >80ms mouse movement distribution within some bounds. I'm thinking KL divergence between the two.
The Kolmogorov-Smirnov Test for two-dimensional data?
There's a lot of interesting possible approaches that can be tuned for arbitrary sensitivity and specificity.
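For what it's worth, a smoothed empirical KL divergence between two reaction-time distributions fits in a few lines. The bin width, smoothing constant, and any flagging threshold you'd put on top are all assumptions, not tuned values:

```python
import math
from collections import Counter

def kl_divergence(samples_p, samples_q, bin_ms=10, eps=1e-9):
    """Approximate D_KL(P || Q) between two empirical reaction-time
    distributions (in milliseconds), binned into `bin_ms` buckets.
    `eps` smoothing avoids log(0) on bins one side never hits."""
    def bins(xs):
        return Counter(int(x // bin_ms) for x in xs)
    p, q = bins(samples_p), bins(samples_q)
    n_p, n_q = len(samples_p), len(samples_q)
    return sum(
        (p[k] / n_p + eps) * math.log((p[k] / n_p + eps) / (q[k] / n_q + eps))
        for k in set(p) | set(q)
    )
```

Identical distributions give a divergence near zero; a pile of sub-threshold reactions that looks nothing like the player's normal movement gives a large one.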
And the SOTA anti-cheats now use IOMMU shenanigans to keep DMA devices from seeing the game state. The arms race continues.
I feel like this is the same as saying "seatbelts don't prevent car accident deaths at all", just because people still die in car accidents while wearing seat belts.
Just because something isn't 100% effective doesn't mean it doesn't provide value. There is a LOT less cheating in games with good anti-cheat, and it is much more pleasant to play those games because of it. There is a benefit to making it harder to cheat, even if it doesn't make it impossible.
It might just be the game too - I do think the auto aim is a bit high, because I feel like I make aimbot-like shots from time to time. And depending on the mode, BF6 _wall hacks for you_ if there are players in an area outside of where they are supposed to be defending. I was pretty surprised to see a little red floating person overlay behind a wall.
The vast majority of cheaters in most games are not sophisticated users. Ease of access and use is the biggest issue.
It's really much more nuanced than that. Counter-Strike 2 has already implemented this type of feature, and it immediately produced some clear false positives. There are many situations where high-level players play in a predictive, rather than reactive, manner. Pre-firing is a common strategy that will always look indistinguishable from an inhuman reaction time. So is tap-firing at an angle that you anticipate an opponent may peek you from.
The qualifier "good" for "good anti-cheat" is doing a lot of heavy lifting. What was once good enough is now laughably inadequate. We have followed that thread to its logical conclusion with the introduction of kernel-level anti-cheat. That has proven to be insufficient, unsurprisingly, and, given enough time, the act of bypassing kernel-level anti-cheat will become commoditized just like every other anti-cheat prior.
Not only does this present a huge security risk, it can break existing software and the OS itself. These anti-cheats tend not to be written by people intimately familiar with Windows kernel development, and they cause regressions in existing software which the users then blame on Windows.
That's why Microsoft did Windows Defender and tried to kill off 3rd party anti-virus.
I would beg to differ. In the US at least, there does seem to be a hidden arms race between safety features and the environment (in the form of car size growth)
VAC is still a laughingstock in CS2 - literally unplayable once you've reached 15k+. Riot Vanguard is extremely invasive, but it's leaps and bounds ahead of VAC.
And Valve's banning waves long after the fact don't improve the players' experience at all. CS2 is F2P, alts are easy to get, cheating happens in almost every single high-ranked game, and the players' experience is shit.
Anti-cheat makers don't need to eliminate cheating completely; they just need to catch enough cheating (and ban unpredictably) that average people are mostly discouraged. As long as cheat creators have to scurry around in secrecy and guard their implementations until they are caught, the "good" cheats will never be a commodity on mainstream, well-funded games with good anti-cheat.
Cheat-creators have to do the hard hacking and put their livelihoods on the line, they make kids pay up for that.
And being real, the zero-day cheats are closely guarded and trickled out and sold for high prices as other cheats get found out, so for AAA games, the good cheats are priced out of comfort zone and anyone who attempts the lazy/cheap cheats is banned pretty quickly. A significant portion of the dishonest becomes honest through laziness or self-preservation. Only a select few are truly committed to dishonesty enough to put money and their accounts on the line.
Same way there are fewer murderers and thieves than there are non-murderers and non-thieves (at least in western countries).
As always, one of the most difficult parts is getting good features and data. In this case one difficulty is measuring and defining the reaction time to begin with.
In Counter-Strike you rely on footsteps to guess if someone is around the corner and start shooting when they come close. For far away targets, lots of people camp at specific spots and often shoot without directly sighting someone if they anticipate someone crossing - the hit rate may be low but it's a low-cost thing to do. Then you have people not hiding too well and showing a toe. Or someone pinpointing the position of an enemy based on information from another player. So the question is, what is the starting point for you to measure the reaction?
Now let's say you successfully measured the reaction time and applied a threshold of 80ms. Bot runners will adapt and sandbag their reaction time, or introduce motions to make it harder to measure mouse movements, and the value of your model now is less than the electricity needed to run it.
So much for the proposal to solve the reaction-time problem with KL divergence: congratulations, you've solved a trivial statistics problem that creates very little business value.
A human can't really, which is why you need to bring in ML. Feed it enough game states of legit players vs known cheaters, and it will be able to find patterns.
A properly designed game should not send the positions of enemies out of view.
This is generally the anti-cheat problem. Certain genres have gameplay that cannot be implemented without trusting the client at least some of the time.
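A minimal sketch of that server-side visibility culling. The grid map, wall layout, and sampling-based line-of-sight check are all invented for illustration; real engines use PVS/occlusion structures and also have to account for sound, shadows, and latency, which is where trusting the client creeps back in:

```python
# A wall at x=5 spanning y=0..9, with one doorway at (5, 4).
WALLS = {(5, y) for y in range(10)} - {(5, 4)}

def line_of_sight(a, b, walls=WALLS, steps=100):
    """Sample points along the segment a->b; the view is blocked if any
    sampled point lands inside a wall cell."""
    (ax, ay), (bx, by) = a, b
    for i in range(steps + 1):
        t = i / steps
        cell = (int(ax + (bx - ax) * t), int(ay + (by - ay) * t))
        if cell in walls:
            return False
    return True

def snapshot_for(viewer, enemies, walls=WALLS):
    """The state the server actually sends this client: visible enemies only.
    Anything filtered out here is information a wallhack can never read."""
    return [e for e in enemies if line_of_sight(viewer, e, walls)]
```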
You aren't eliminating cheaters, that's impossible; you are limiting their impact.
A suitable game engine would have knowledge of when a shadow, player, grenade, noise, or other reactable event occurs for a given client.
Especially if games aren't processed in real time but processed later, based on a likelihood of cheating drawn from other stats.
I've played at the pro level. Nobody pre-fires with perfect robotic consistency.
I don't care if it takes 50 matches of data for the statistical model to call it inhuman.
Valve has enough data that they could easily make the threshold for a ban something like '10x more consistent at pre-firing than any pro has ever been' with a high confidence borne over many engagements in many matches.
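A hedged sketch of what such a consistency threshold might look like. The pro baseline number, the 10x factor, and the sample minimum are placeholders, not real measurements:

```python
import statistics

def inhumanly_consistent(prefire_deltas_ms, pro_min_stddev_ms=18.0,
                         factor=10.0, min_samples=200):
    """Flag timing consistency far beyond any recorded human baseline.

    `prefire_deltas_ms`: per-engagement deltas (ms) between a reactable event
    and the shot, pooled across many matches. `pro_min_stddev_ms` is a made-up
    stand-in for 'the most consistent pro ever measured'. Requiring a large
    sample means one lucky streak in one match can never trip the flag.
    """
    if len(prefire_deltas_ms) < min_samples:
        return False
    return statistics.stdev(prefire_deltas_ms) * factor < pro_min_stddev_ms
```

The deliberate asymmetry: a human with jittery timing never gets near the threshold, while a bot has to sandbag its own consistency by an order of magnitude to evade it, which erodes the very advantage it was sold for.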
Having some anti-cheat is better than no anti-cheat but my point is it’s not a shield. It’s a cheese grater.
Then all you need to do to fool this anticheat is to add some randomness to the cheat.
My world is pretty fine, as I don't play games on servers without active admins/mods who kick and ban people who obviously cheat.
ML solutions can maybe help here, but I'll believe they can reliably detect cheats, without also banning lucky or skilled players, once I see it.
This is one of the cases where ML methods seem appropriate.
You've made them the same as the best players. Otherwise we're banning the best players.