Assistive technologies will still work because browsers implement the platform's assistive APIs.
Automated testing will still work because developers aren't going to add restrictions that block their own tests against their own site, unless they are specifically testing whether a captcha gets shown from an unsafe environment.
Archives, search engines, and spiders should already be respecting robots.txt, and site owners can already block them if they don't want their site crawled.
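For what it's worth, honoring robots.txt is trivial for a well-behaved crawler. Here's a rough sketch using Python's standard-library robotparser; the bot name and URLs are just placeholders, not anything from the proposal:

```python
# Minimal sketch: a well-behaved crawler checks robots.txt before fetching.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()  # fetch and parse the site's robots.txt

# Only fetch the page if the site's rules allow this user agent.
if rp.can_fetch("ExampleArchiveBot", "https://example.com/some/page"):
    print("allowed to crawl")
else:
    print("site owner has opted this path out of crawling")
```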
> This means that no single party decides which form-factors, devices, operating systems, and browsers may access the Web.
The proposal allows anyone to become an attester, so there would not be a single attester to whom you would have to prove your trustworthiness.
Websites aren't the only parties that might want to run automated tests.
> Archives, search engines, and spiders should already be respecting robots.txt, and site owners can already block them if they don't want their site crawled.
robots.txt is not law. Archives, search engines, and spiders SHOULD ignore it in cases where they deem that the more moral action. After all, all of them are supposed to snapshot the web as humans see it.