The distribution of the bytes matters a bit here. In theory the model could be overtrained on a single copyrighted work such that it is almost perfectly preserved within the model's weights.
You can see this with the Mona Lisa: you could get fairly close reproductions back just by asking for it (or at least you could in one earlier iteration of the model). It likely overfit because the image is so ubiquitous in the training data.