The exceptions tend to be things like pictures of a specific city's skyline. Not because the model is copying a particular image, but because that's what that city's skyline looks like, so that's what an arbitrary picture of it looks like. But those are the pictures that lack original creativity to begin with -- which is why the pictures in the training data are all the same and so is the output.
And people seem to make a lot of the fact that it will often reproduce watermarks, but the reason it does that isn't that it's copying a specific image. It's that there are a large number of images of that subject with that watermark. So even though it's not copying any of them in particular, it's been trained that pictures of that subject tend to have that watermark.
Obviously lawyers are going to have a field day with this, because it sits right on top of an existing problem with copyright law. The traditional way you show copying is similarity plus access. That test no longer really means anything, because you now have databases of billions of works which are public (so everyone has access), and computers that can efficiently search them all to find the existing work most similar to any new one. And if you put those two works next to each other they're going to look similar to a human, because the match is the 99.9999999th percentile nearest neighbor from a database of a billion images, regardless of whether the new one was actually derived from the existing one. It's the same reason YouTube Content ID has false positives -- and its reference database only covers major Hollywood productions. A large image database would have orders of magnitude more.
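To make the "find the most similar existing work" step concrete, here's a minimal sketch in Python. It assumes images have already been reduced to fixed-length feature vectors; the embed() helper is a crude stand-in (an 8x8 grayscale thumbnail) for whatever perceptual hash or learned embedding a real system would use, and the function names are just illustrative, not any particular product's API.

```python
# Sketch: brute-force "most similar existing work" search over image embeddings.
# embed() is a stand-in for a real perceptual hash / learned embedding.
import numpy as np
from PIL import Image

def embed(path: str) -> np.ndarray:
    """Crude stand-in embedding: 8x8 grayscale thumbnail, flattened and normalized."""
    img = Image.open(path).convert("L").resize((8, 8))
    v = np.asarray(img, dtype=np.float32).ravel()
    return v / (np.linalg.norm(v) + 1e-9)

def nearest_match(query_path: str, corpus_paths: list[str]) -> tuple[str, float]:
    """Return the corpus image most similar to the query, by cosine similarity."""
    q = embed(query_path)
    corpus = np.stack([embed(p) for p in corpus_paths])  # shape: (N, 64)
    sims = corpus @ q                                     # cosine similarities
    best = int(np.argmax(sims))
    return corpus_paths[best], float(sims[best])
```

At real scale you'd precompute the corpus vectors once and use an approximate-nearest-neighbor index instead of brute force, but the principle is the same: the top hit out of a billion candidates will look similar to a human whether or not the new work was actually derived from it.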