If I could figure out how to build it safely I'd absolutely do that.
Wrote a bit more here but that is the gist: https://zero2data.substack.com/p/trusted-prompts
If an attacker can send enough tokens they can find a combination of tokens that will confuse the LLM into forgetting what the boundary was meant to be, or override it with a new boundary.