Secure attestation about device state requires something akin to Secure Boot (with a TPM), and in a BYOD environment that precludes the device owner from having full control of their own hardware. Obviously this is not an issue if the organization only permits access to its services from devices it owns, but no organization should have that level of control over devices owned by employees, vendors, customers, or anyone else who requires access to the organization's services.
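To make that concrete, here's a minimal sketch of the verifier's side of remote attestation, with Ed25519 standing in for a real TPM's endorsement-key hierarchy; every name and value is illustrative, not any vendor's actual API:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # The relying party only accepts measurements signed by hardware the
    # vendor endorses, and only if they match an allowlisted build.
    ALLOWED_BOOT_DIGESTS = {bytes(32)}  # stand-in for an approved OS image hash

    def verify_quote(vendor_endorsed_key, quote: bytes, signature: bytes) -> bool:
        """(a) Signed by endorsed hardware? (b) An approved boot chain?
        A self-built OS fails (b) even on perfectly genuine hardware."""
        try:
            vendor_endorsed_key.verify(signature, quote)  # (a)
        except InvalidSignature:
            return False
        return quote in ALLOWED_BOOT_DIGESTS              # (b)

    # Device side (simulated): a key sealed in hardware signs the boot measurement.
    device_key = Ed25519PrivateKey.generate()
    measurement = bytes(32)  # stand-in for hashed PCR values
    print(verify_quote(device_key.public_key(), measurement,
                       device_key.sign(measurement)))

Note that check (b) is purely a policy decision by the remote party: the hardware attests honestly, but the allowlist decides whose builds count as "trusted".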
I use LineageOS on my phone, and do not have Google Play Services installed. The phone only meaningfully interacts with a handful of the most basic Google services: an HTTP server for captive portal detection on Wi-Fi networks, an NTP server for setting the clock, and the like. All other "high-level" services that I am aware of, like Mail, Calendaring, Contacts, Phone, and Instant Messaging, are either provided by other parties that I feel more comfortable with, or are services I actually host myself.
Now let's assume I want or need to do online/mobile banking on my phone - that will generally only work with the proprietary app my bank provides. Even if I install their unmodified APK, SafetyNet will not attest my LineageOS-powered phone as "kosher" (or "safe and secure", or "healthy", or whatever Google prefers calling it these days), and the app might refuse to work. As a consequence, I'm effectively unable to interact with the remote service provided by my bank, because they believe they've got to protect me from the OS/firmware build that I personally chose to use.
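For a sense of what that gate looks like from the other side, here's a rough sketch of the server-side check a banking backend might run on a SafetyNet attestation result. The payload fields (ctsProfileMatch, basicIntegrity) are Google's documented ones; the hard-fail policy around them is this example's assumption, not any particular bank's code:

    import base64
    import json

    def attestation_passes(jws: str) -> bool:
        # Real code must first validate the JWS signature chain up to a
        # trusted Google root certificate; that step is omitted for brevity.
        payload_b64 = jws.split(".")[1]
        payload = json.loads(
            base64.urlsafe_b64decode(payload_b64 + "=" * (-len(payload_b64) % 4)))
        # A LineageOS phone without Play Services typically fails
        # ctsProfileMatch, and often can't produce a verdict at all.
        return bool(payload.get("ctsProfileMatch")) \
            and bool(payload.get("basicIntegrity"))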
Sure, "just access their website via the browser, and do your banking on their website instead!", you might say, and you'd be right for now. But with remote attestation broadly available, what prevents anyone from also using that for the browser app on my phone, esp. since browser security is deemed so critical these days? I happen to use Firefox from F-Droid, and I doubt any hypothetical future SafetyNet attestation routine will have it pass with the same flying colors that Google's own Chrome from the Play Store would. I'm also certain that "Honest c0l0's Own Build of Firefox for Android" wouldn't get the SafetyNet seal of approval either, and with that I'd be effectively shut off from interacting with my bank account from my mobile phone altogether. The only option I'd have is to revert back to a "trusted", "healthy" phone with a manufacturer-provided bootloader, firmware image, and the mandatory selection of factory-installed, non-removable crapware that I am never going to use and/or (personally) trust that's probably exfiltrating my personal data to some unknown third parties, sanctified by some few hundreds of pages of EULA and "Privacy" Policy.
With app stores on every mainstream, commercially successful desktop OS, with the recent Windows 11 "security and safety" "advances" Microsoft introduced by (as of today, apparently still only mildly) requiring TPM support, and with Microsoft supplying manufacturers with "secure enclave"-style add-on chips of its own design ("Pluton", see https://www.techradar.com/news/microsofts-new-security-chip-...), I can see this happening to desktop computing as well. I could probably still compile all the software I want on my admittedly fringe GNU/Linux system (or let the Debian project compile it for me), but it wouldn't matter much - any online interaction with the "real" part of the world that isn't made by and for software freedom enthusiasts/zealots would refuse to talk to the non-allowlisted software builds on my machine.
It's going to be exactly the future that NoTCPA et al. campaigned against in the early 00s, and I really do dread it.
It seems like the sensible rule of thumb is: If your organization needs that level of control, it's on your organization to provide the device.
Suppose the course I've been studying for the past three years now uses $VideoService, and $VideoService uses remote attestation to gate the videos behind a retinal scan, ten distinct fingerprints, the last year's GPS history, and the entire contents of my hard drive.¹ If I could spoof the traffic to $VideoService, I could get the video anyway - but every request is signed by the secure enclave. (I can't get the video off somebody else, because the service uses the webcam to detect when a camera-like object is pointed at the screen. They can't bypass that, because of the remote attestation.)
If I don't have ten fingers, and I'm required to scan ten fingerprints to continue, and I can't send fake data because my computer has betrayed me, what recourse is there?
¹: exaggeration; no real-world company has quite these requirements, to my knowledge
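To illustrate why the spoofing fails in this (again, hypothetical) scenario: each request carries a signature from a key that exists only inside the secure enclave, so no client the user controls can forge valid traffic. A toy sketch:

    import hashlib
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    class Enclave:
        """Stand-in for hardware: signs on demand, never exports the key."""
        def __init__(self):
            self._key = Ed25519PrivateKey.generate()  # sealed in hardware
            self.public_key = self._key.public_key()

        def sign_request(self, body: bytes) -> bytes:
            return self._key.sign(hashlib.sha256(body).digest())

    enclave = Enclave()
    request = b"GET /lectures/week3.mp4"
    signature = enclave.sign_request(request)
    # $VideoService verifies against the device key enrolled at sign-up: a
    # modified OS can't extract the key, and a captured session can't sign
    # new request bodies. (verify raises InvalidSignature on a forgery.)
    enclave.public_key.verify(signature, hashlib.sha256(request).digest())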
1) The requirements themselves. These are different for consumer vs. employee scenarios. In general, I'd prefer we err on the side of DRM-free for things like media, but there are legitimate concerns around things like data privacy when you are an employee of an organization handling sensitive data.
2) Presuming there are legitimate reasons to require strong validation of the user and untampered software, we have the choice of A) using only organization-supplied hardware in those cases, or B) using your own hardware with some kind of restriction. I'd much prefer to use my own as much as possible ... if I can be assured that it won't spy on me, or limit what I can do, outside the organization-specific purposes I've explicitly opted in to.
> I'm uncomfortable letting organisations have control over the software that runs on my hardware.
I'm not, if we can sandbox. I'm fine with organizations running JavaScript in my browser, for instance, or running mobile apps that can access certain data with explicit permissions (like granting access to my photos so that I can share them in-app). I think we can do better with more granular permissions, better UX, and cryptographic guarantees to both the user and the organization that the computation and the data are operating at the agreed level.
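As a toy model of that "granular, explicitly opted-in" contract - nothing here corresponds to a real OS API, it just shows the shape of the arrangement:

    from dataclasses import dataclass, field

    @dataclass
    class Sandbox:
        """Mediates every access attempted by organization-supplied code."""
        granted: set = field(default_factory=set)

        def grant(self, permission: str) -> None:
            # Only ever reachable from a user-driven consent prompt.
            self.granted.add(permission)

        def read(self, permission: str, device_data: dict) -> bytes:
            if permission not in self.granted:
                raise PermissionError(f"'{permission}' was never granted")
            return device_data[permission]

    device_data = {"photos": b"<jpeg bytes>", "gps_history": b"<track log>"}
    sandbox = Sandbox()
    sandbox.grant("photos")                       # explicit, granular opt-in
    print(sandbox.read("photos", device_data))    # allowed
    try:
        sandbox.read("gps_history", device_data)  # never opted in
    except PermissionError as e:
        print("blocked:", e)

The point is that the organization's code runs against the broker rather than the raw device, and the same mechanism could carry cryptographic guarantees in both directions.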