It's an "I don't want to wake up to all our stuff running only on the backup provider because cloudflare shut us down for seemingly no reason with no warning".
It's about avoiding unnecessary alerts and triage for the ops team by snipping an apparent liability out of the stack. I've already done the same after seeing a few of these kinds of interactions with Cloudflare in the R2 Discord.
When I see a blog post detailing why this has been happening so often, and what they've done to fix it, I'll happily pull that infra code out of mothballs.
Every single one of the cloud providers has had instances of this kind of problem. It's something of an inevitability given the way they all work: eventually someone triggers an automated system somewhere and gets taken down, or has an outage they shouldn't have had.
Better Cloudflare, where the CTO hangs out on HN, than Google, where neither the ban nor the appeal involves a human with any empathy.