It's clear that OpenAI couldn't predict all the ways users would interact with its model; we quickly saw things like prompt discovery and prompt injection happening.
So not only is it reasonable that OpenAI didn't know users would be able to retrieve snippets of training material verbatim; it's also reasonable to say they weren't negligent for not knowing. This is a new technology that was never meant to operate that way. It's not that different from a security vulnerability that gets patched once it's discovered.
Negligence turns on a failure to exercise reasonable care, and that's going to be very hard to prove.
And it's not as if people started using ChatGPT as a replacement for the NYT. Even in a negligence suit, you have to show harm, and I think the NYT will be hard-pressed to show they lost a single subscriber.