Like most adversarial attacks, they get more perceptible the more transformations of the data they try to be robust to (ironically, in practice both, applied at a strength that isn't trivially removable, tend to make images look like slightly janky AI output), and they're specific to the net(s) they target, so it's more of a temporary defense against the current generation than a long-term protection.
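To make the model-specificity concrete, here's a minimal sketch (assuming PyTorch; the two tiny nets, the random "image", and the eps values are all made up for illustration, not anyone's actual protection scheme): a one-step FGSM perturbation crafted against one net raises that net's loss sharply but barely moves an independently initialized net, and eps is exactly the robustness/perceptibility knob.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
ce = nn.functional.cross_entropy

def make_net():
    # Tiny stand-in classifier; real protections target much larger models.
    return nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                         nn.Flatten(), nn.Linear(8 * 32 * 32, 10))

net_a, net_b = make_net(), make_net()  # stand-ins for two model generations
x = torch.rand(1, 3, 32, 32)           # stand-in "image" with pixels in [0, 1]
label = torch.tensor([3])

def fgsm(net, x, label, eps):
    # One-step Fast Gradient Sign Method: nudge each pixel by +/- eps
    # along the sign of the loss gradient for this specific net.
    x = x.detach().clone().requires_grad_(True)
    ce(net(x), label).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

for eps in (0.01, 0.05, 0.2):  # bigger eps = harder to remove, easier to see
    x_adv = fgsm(net_a, x, label, eps)
    with torch.no_grad():
        # Loss increase on the targeted net vs. a net the attack never saw.
        da = (ce(net_a(x_adv), label) - ce(net_a(x), label)).item()
        db = (ce(net_b(x_adv), label) - ce(net_b(x), label)).item()
    print(f"eps={eps}: loss increase on target net_a={da:+.3f}, "
          f"on unseen net_b={db:+.3f}")
```

The gradient direction that fools net_a is close to a random direction from net_b's point of view, which is why the loss barely moves there; that's the sense in which these defenses only bind the generation of models they were computed against.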