* Legal stuff (e.g., some algorithm detected child porn in his account; is an employee legally allowed to look at it to confirm the algorithm was correct? No.)
* Internal politics (e.g., one team has found this account DoSing their service, while the account is perfectly normal in all other ways, but because Google's systems are so complex a single-service ban is very hard to implement)
* GDPR/privacy laws (The law requires the deletion of no-longer-needed data. As soon as his account gets banned, the data is no longer needed for Google's business purposes (of providing service to him), so the deletion process can't be delayed.)
* Stolen/shared accounts. All it takes is one evil browser extension to steal your user account cookie and go on a spamming spree. Figuring out how it happened is near impossible (user-specific logs are anonymized). Usually just resetting the user's logins doesn't solve it, because the malware is still on the user's computer/phone and will steal the cookie again.
* Falsely linked accounts. Some spammers create Gmail addresses to send spam, but to disguise them they link lots of real people's accounts, for example by using someone else's recovery phone number, email address, contacts/friends, etc. In many cases they will compromise real accounts to create all these links, all so that as many real users as possible will be hurt if their spamming network is shut down.
* Untrusted employees. Google tries not to trust any employee with blanket access to your account. That means they couldn't even hire a bunch of workers to review these accounts - without being able to see the account's private data, an employee wouldn't be able to tell good from bad accounts.
* Attacks on accounts. There are ways for someone who doesn't like you to get a Google account banned. Usually no logs are kept (for privacy reasons) that would help identify what happened. Example method: email someone a PDF file containing an illegal image, then trick them into clicking "save to drive". The PDF can have the image outside the border of the page, so it looks totally normal.
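The off-page trick in that last bullet is easy to demonstrate. Here's a minimal sketch (assumptions: Python, no libraries, a hand-rolled PDF with the cross-reference table omitted, which most viewers tolerate by rebuilding it) that draws a rectangle at negative coordinates. Viewers clip rendering to the page's MediaBox, so the page looks blank, yet the object is still embedded in the file and visible to any scanner that parses the content stream.

```python
# Content stream: fill a 100x100 rectangle at (-200, -200) - entirely
# outside the MediaBox [0 0 612 792], so it never renders on screen.
content = b"0 0 0 rg -200 -200 100 100 re f"

# Hand-assembled minimal PDF (no xref table; illustrative only).
pdf = (
    b"%PDF-1.4\n"
    b"1 0 obj << /Type /Catalog /Pages 2 0 R >> endobj\n"
    b"2 0 obj << /Type /Pages /Kids [3 0 R] /Count 1 >> endobj\n"
    b"3 0 obj << /Type /Page /Parent 2 0 R /MediaBox [0 0 612 792] "
    b"/Contents 4 0 R >> endobj\n"
    b"4 0 obj << /Length " + str(len(content)).encode() + b" >>\n"
    b"stream\n" + content + b"\nendstream\nendobj\n"
    b"trailer << /Root 1 0 R >>\n"
    b"%%EOF\n"
)

with open("offpage.pdf", "wb") as f:
    f.write(pdf)
```

Open `offpage.pdf` in a viewer and the page is empty; grep the file and the off-page drawing command is plainly there. Swap the rectangle for an image XObject and you get exactly the attack described above.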
Yes, it's solvable, and Google should put more effort into it, but it's hard to do.
Child porn detection and enforcement literally does not work that way. I'm not sure how you even think that would work. How do you think the algorithm gets trained? Humans feed data into it. All the major social media companies (Facebook, etc.) have paid human moderators who have to screen flagged content in many cases to determine whether it is illegal and then escalate to the relevant staff or authorities, and in some cases this is a legal requirement.
The GDPR one is especially ridiculous. Why would you be required to delete a user's data the moment you suspend their account? That's utterly absurd, it completely eliminates the user's recourse in the event of an error. No reasonable human being would interpret the laws that way and the relevant regulators (yes, GDPR is enforced by humans) would never require you to do that.
Google already has measures to deal with malware on machines, typically temporary or permanent bans of the hardware and/or IP address. They don't have to permanently delete your Gmail account to lock out Chrome on a single malware-infected PC. If you've ever done any automation or browsed on a shared network, you've probably seen Google Search throw up the 'automated traffic' warning and block you for a bit.
Being able to review conduct of an account (i.e. browse logs) is not "blanket access to your account" and neither is being able to examine the details on why the account was banned and reverse them. The account owner could also authorize the employee to access their data - any time you talk to a Customer Service representative for a company, you're doing this.
Normally that happens to me when I start to adjust my query to get Google to do what it used to do.