At the protocol (or disk, etc.) boundary. If I write code that consumes bytes that are intended to be UTF-8, I need to make a choice about what to do if they aren't UTF-8 somewhere. A strict UTF-8 string forces me to make that choice in a considered location. In a language where a "string" is just bytes, I can forget, or two pieces of code can disagree on what the contract is. And bugs result.
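To make the "considered location" concrete, here is a rough Rust sketch; the frame-handling function and the choice to reject (rather than replace or lossily decode) bad input are illustrative assumptions, not the only reasonable policy:

    use std::str;

    // The decision about malformed bytes is made here, once, at the boundary.
    fn handle_frame(raw: &[u8]) -> Result<(), String> {
        let text: &str = match str::from_utf8(raw) {
            Ok(s) => s,
            Err(e) => return Err(format!("rejecting non-UTF-8 payload: {e}")),
        };
        // Everything past this point can rely on `text` being valid UTF-8.
        println!("got {} chars", text.chars().count());
        Ok(())
    }

    fn main() {
        assert!(handle_frame("héllo".as_bytes()).is_ok());
        assert!(handle_frame(&[0xff, 0xfe]).is_err());
    }

The point is that the Result forces the caller to acknowledge the failure case up front; in a bytes-are-strings language the same bytes just flow onward until something far downstream chokes on them.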
Check out MySQL for a fun example of getting this wildly, impressively wrong. At least a Rust or a type-checked Python 3 wrapper around some MySQL code enforces a degree of correctness, which is much better than having your transaction fail to commit, or commit incorrectly, way down the stack when you get bytes you didn't expect.
(MySQL can still reject strictly valid UTF-8 data for utterly pathetic historical reasons if you configure it incorrectly.)