(Microsoft is just as bad: their salespeople can't be bothered to talk to anyone who isn't a partner. That worked out great for me, though; I wasn't really feeling Azure, and it made a great excuse not to consider them. One of their salespeople did leave me a voicemail three or four months later, but we had already chosen another vendor by then.)
If I ever have an issue with Google, I might try starting an AdWords campaign, asking to speak to supervisors when their sales calls come through, and seeing if there's an in along the lines of "we would spend more, but you've done X that needs to be resolved first".
My other approach (I haven't tried it on Google, but it has worked very well on DHL and Uber so far) is to sign up for LinkedIn's premium subscription and use it to InMail a bunch of VPs/SVPs, setting out my grievance. My experience so far is that you need to find someone high enough up to be under the illusion, from lack of customer contact, that everything is well. They often seem shocked to hear that customers hit a wall, and they get approached rarely enough that it's a novelty for them to help out (as such, it'll probably stop working if everyone starts doing this...)
With DHL in particular, I got an SVP to have his assistant light a fire under the customer service operation by telling them said SVP wanted to be kept up to date on how it went, and cc'ing said SVP and me on the emails. A package they "could do nothing about" because it was supposedly on a boat back to the US magically appeared in my office one business day later, after it was located in a depot five minutes from my office (I wish I could say that was the first time DHL had told me a package was somewhere completely different from where it actually was).
The latter parts of the story took place while I was part of Common Crawl, a public-good dataset that has seen a great deal of use. During my tenure there I crawled over 35 billion webpages, more than 2.5 petabytes, mostly by myself.
I'd always felt guilty about one specific case: our crawler hit a big-name web company (a top-N web company) with up to 3,000 requests per second* and they sent a lovely note that began with how much they loved the dataset but ended with "please stop thrashing our cache or we'll need to ban your crawler". It was difficult to fix properly given our limited engineering resources, and because they served many tens or hundreds of thousands of domains, some of which essentially proxied requests back to them (one possible mitigation is sketched at the end of this comment).
Knowing Google hammered you at 120k requests per second, backing off to _only_ 20k per second, has assuaged some portion of that guilt.
* Up to 3,000 requests per second: it would spike once every half hour or hour when parallelizing across a new set of URL seeds, then decrease, and the crawl wasn't active for the whole month.
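For what it's worth, the fix we never had the engineering time for might have looked something like the sketch below: a token bucket keyed on resolved IP rather than hostname, so the many domains fronting the same backend share one request budget. To be clear, this is my own illustration, not Common Crawl's actual crawler code; the names are made up, and resolved IPs are an imperfect key once CDNs and round-robin DNS enter the picture.

    import socket
    import threading
    import time
    from urllib.parse import urlsplit

    class BackendRateLimiter:
        # Token bucket keyed on resolved IP rather than hostname, so the
        # many domains that proxy to one backend share a single budget.
        def __init__(self, rate_per_sec, burst):
            self.rate = rate_per_sec
            self.burst = burst
            self._dns_cache = {}   # hostname -> IP (resolved once, cached)
            self._buckets = {}     # IP -> (tokens, last_refill_time)
            self._lock = threading.Lock()

        def _key_for(self, host):
            # DNS resolution as a rough stand-in for "same backend";
            # fall back to the hostname itself if resolution fails.
            if host not in self._dns_cache:
                try:
                    self._dns_cache[host] = socket.gethostbyname(host)
                except OSError:
                    self._dns_cache[host] = host
            return self._dns_cache[host]

        def acquire(self, url):
            # Block until this URL's backend has a token to spend.
            key = self._key_for(urlsplit(url).hostname or "")
            while True:
                with self._lock:
                    now = time.monotonic()
                    tokens, last = self._buckets.get(key, (self.burst, now))
                    tokens = min(self.burst, tokens + (now - last) * self.rate)
                    if tokens >= 1.0:
                        self._buckets[key] = (tokens - 1.0, now)
                        return
                    self._buckets[key] = (tokens, now)
                    wait = (1.0 - tokens) / self.rate
                time.sleep(wait)

    # e.g. cap any one backend at 10 requests/sec, with bursts of 20
    limiter = BackendRateLimiter(rate_per_sec=10.0, burst=20.0)
    limiter.acquire("https://example.com/some/page")  # blocks until allowed

Even a crude shared budget like this might have smeared those 3,000 req/s spikes out across the half hour instead of delivering them as bursts, at the cost of a DNS lookup per new hostname.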