Unfortunately, Differential Privacy proofs can be used to justify applications that turn out to leak privacy when the proofs are found to be incorrect after the fact, once the data is already out there and the damage is already done.
Nevertheless, it is instructive to see how perilously few queries can be answered before privacy is compromised, which puts the lie to the irresponsible idea of "anonymization".
You are right that some differential privacy proofs have later been found to be wrong. For example, there is an entire paper about bugs in initial versions of the sparse vector technique [1].
However, I imagine this will evolve the way cryptographic security has evolved: at some point, enough experts will have examined algorithm X to be confident in its differential privacy proof; then some experts will implement it carefully; and the rest of us will use their work because "rolling [our] own" is too tricky.
If nothing else, I appreciate the Differential Privacy effort for showing that the problem space is wicked hard.
I've worked with medical records and on protecting voter privacy. There's a lot of wishful thinking that leads to unsafe practices. Having better models to describe what's what would be nice.
The reason is that you are thinking of an example that doesn't fit differential privacy very nicely. A basic example of DP would be a statistical query: approximately how many people gave Movie X three stars? You can ask a bunch of those queries, with some noise added to each answer, and still be protected against re-identification.
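For concreteness, here is a minimal sketch in Python of the standard Laplace mechanism for that kind of counting query. The {(person, movie): stars} layout and the epsilon value are made up for the example; the point is that the noise scale is 1/epsilon because the query's sensitivity is 1 (adding or removing one person changes the count by at most 1).

    import random

    def noisy_count(ratings, movie, stars, epsilon=0.5):
        # True answer: how many people gave `movie` exactly `stars` stars.
        true_count = sum(1 for (_, m), r in ratings.items()
                         if m == movie and r == stars)
        # Laplace(scale = 1/epsilon) noise, sampled as the difference of
        # two exponentials (a standard way to draw Laplace noise).
        noise = random.expovariate(epsilon) - random.expovariate(epsilon)
        return true_count + noise

    # Toy data, purely for illustration.
    ratings = {("alice", "Movie X"): 3,
               ("bob", "Movie X"): 4,
               ("carol", "Movie X"): 3}
    print(noisy_count(ratings, "Movie X", 3))

Each such query spends some of the privacy budget, so the noise (or the total epsilon) has to grow with the number of queries you answer.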
You can still try to release a noisy version of the whole database using DP, but it will be very noisy. A basic algorithm (not a good one) would be something like this:
For each entry (person, movie):
with probability 0.02, keep the original rating
otherwise, pick a rating at random
(A better one would probably compute a low-rank approximation, then add small noise to that.)
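In case it helps, here is a runnable sketch of the basic per-entry version above (a randomized-response-style release). The 1-to-5 star scale and the {(person, movie): rating} layout are assumptions for the example.

    import random

    STARS = [1, 2, 3, 4, 5]   # assumed rating scale
    P_KEEP = 0.02             # chance of keeping the true rating

    def randomized_release(ratings):
        # Each entry is, with high probability, replaced by a random rating,
        # so any single released rating says very little about the truth.
        noisy = {}
        for (person, movie), rating in ratings.items():
            if random.random() < P_KEEP:
                noisy[(person, movie)] = rating
            else:
                noisy[(person, movie)] = random.choice(STARS)
        return noisy

By the usual randomized-response argument, each released rating satisfies roughly ln(1 + 0.02*5/0.98) ≈ 0.1 differential privacy for the rating value (which movies a person rated at all is still exposed by this particular sketch). That is strong protection per entry, but as said above it leaves the released table extremely noisy.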