If a person with a very good memory reads an article, they only violate copyright if they write it out and share it, or perform the work publicly. If they have a reasonable understanding of the law, they won't do so. However, a malicious person could absolutely trick or force them into reproducing the copyrighted work. In that case the blame falls not on the person who read and recited the article, but on the person who tricked them.
That distinction is one we're going to have to codify all over again for AI.
Why? If I steal a bunch of unique works of art and store them in my house for only me to see, am I still committing a crime?
Of course, OpenAI and most other "AI" systems aren't affairs "inside the home"; they are deployed publicly, far and wide.
But if you simply copied the unique works and stored them, nobody would care. If you then turned around and tried to sell the copies, well, the artist is probably dead anyway and the art is probably in the public domain, but if not, then yes, it would be copyright infringement.
If you only copied tiny parts of the art, though, then a fair-use analysis might come into play in court. It just depends on whether they decide to sue you, as the NYT did in this case, while millions of others did not (or just didn't have the resources to).
Is that really true? And what if the second person isn't malicious? With ChatGPT, a user may accidentally write a prompt that causes the model to recite copyrighted text. I don't think a judge will look at this through the same lens you do.
The general prescription society has come up with (one that, I grant, not everyone accepts) is that we reserve control of some of these weapons to governments and outright ban others (chemical weapons, biological weapons, and the like) through treaties. If LLMs can cause that much damage and be abused that widely, you have to stop asking whether a given user is culpable and start asking whether their widespread use is acceptable at all, or whether it should be controlled.