There is a precedent: there were exploit prompts that could get ChatGPT to emit random training-set data. It would emit repeated words or gibberish that then spontaneously converged into snippets of training data.
OpenAI quickly worked to patch those and, presumably, invested effort into preventing the model from emitting verbatim training data.
It wasn't as simple as asking it to emit verbatim articles, IIRC. It was more that it would accidentally emit segments of training data after certain prompts, when the repeated sequence was rare enough.
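The "repeat then diverge" behavior can be sketched roughly. This is a hypothetical illustration, not the actual extraction method: `divergence_point` is an invented helper that just finds where a model's output stops repeating the requested word and starts emitting other text.

```python
def divergence_point(output: str, word: str) -> int:
    """Return the index of the first token where the output stops
    repeating `word` (i.e., where it "diverges"), or -1 if it never does.
    Purely illustrative; token = whitespace-separated chunk here."""
    tokens = output.split()
    for i, tok in enumerate(tokens):
        if tok.strip(".,!").lower() != word.lower():
            return i
    return -1
```

For example, an output like `"poem poem poem My address is ..."` would diverge at token index 3, which is the point where leaked training text would begin in the reported attack.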
2. OpenAI's "patch" for that was to use their content moderation filter to flag those types of requests. They've done the same thing for copyrighted-content requests. It's annoying because those requests aren't against the ToS, but it also shows that nothing has been inherently "fixed". I wouldn't even say it was patched; they just put a big red sticker over it.
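If the "patch" really is just request-side filtering, it amounts to something like the sketch below. Everything here is assumed for illustration: the patterns, the `flag_request` name, and the idea that matching prompts are simply refused rather than the model being changed.

```python
import re

# Hypothetical blocklist-style patterns, invented for illustration.
SUSPECT_PATTERNS = [
    re.compile(r"\brepeat\b.*\bforever\b", re.IGNORECASE),  # divergence-style prompts
    re.compile(r"\bverbatim\b.*\b(article|lyrics|training data)\b", re.IGNORECASE),
]

def flag_request(prompt: str) -> bool:
    """Return True if the prompt matches a known exploit pattern.
    Note this blocks the *request*, not the underlying memorization."""
    return any(p.search(prompt) for p in SUSPECT_PATTERNS)
```

The obvious weakness of this approach is exactly the "red sticker" point: any rephrasing that dodges the patterns still reaches an unchanged model.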